00:00:00.000 Started by upstream project "autotest-per-patch" build number 132384
00:00:00.000 originally caused by:
00:00:00.000 Started by user sys_sgci
00:00:00.053 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy
00:00:00.053 The recommended git tool is: git
00:00:00.053 using credential 00000000-0000-0000-0000-000000000002
00:00:00.055 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.126 Fetching changes from the remote Git repository
00:00:00.136 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.198 Using shallow fetch with depth 1
00:00:00.198 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.198 > git --version # timeout=10
00:00:00.260 > git --version # 'git version 2.39.2'
00:00:00.260 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.307 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.307 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:04.673 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:04.684 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:04.695 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD)
00:00:04.695 > git config core.sparsecheckout # timeout=10
00:00:04.707 > git read-tree -mu HEAD # timeout=10
00:00:04.722 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5
00:00:04.743 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag"
00:00:04.743 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10
00:00:04.888 [Pipeline] Start of Pipeline
00:00:04.903 [Pipeline] library
00:00:04.905 Loading library shm_lib@master
00:00:04.905 Library shm_lib@master is cached. Copying from home.
00:00:04.922 [Pipeline] node
00:00:04.931 Running on VM-host-SM9 in /var/jenkins/workspace/nvmf-tcp-vg-autotest_2
00:00:04.933 [Pipeline] {
00:00:04.943 [Pipeline] catchError
00:00:04.944 [Pipeline] {
00:00:04.957 [Pipeline] wrap
00:00:04.965 [Pipeline] {
00:00:04.973 [Pipeline] stage
00:00:04.975 [Pipeline] { (Prologue)
00:00:04.994 [Pipeline] echo
00:00:04.995 Node: VM-host-SM9
00:00:05.002 [Pipeline] cleanWs
00:00:05.011 [WS-CLEANUP] Deleting project workspace...
00:00:05.012 [WS-CLEANUP] Deferred wipeout is used...
00:00:05.019 [WS-CLEANUP] done
00:00:05.213 [Pipeline] setCustomBuildProperty
00:00:05.299 [Pipeline] httpRequest
00:00:05.699 [Pipeline] echo
00:00:05.701 Sorcerer 10.211.164.20 is alive
00:00:05.710 [Pipeline] retry
00:00:05.712 [Pipeline] {
00:00:05.726 [Pipeline] httpRequest
00:00:05.731 HttpMethod: GET
00:00:05.732 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:05.732 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:05.741 Response Code: HTTP/1.1 200 OK
00:00:05.741 Success: Status code 200 is in the accepted range: 200,404
00:00:05.742 Saving response body to /var/jenkins/workspace/nvmf-tcp-vg-autotest_2/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:12.939 [Pipeline] }
00:00:12.958 [Pipeline] // retry
00:00:12.966 [Pipeline] sh
00:00:13.251 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:13.267 [Pipeline] httpRequest
00:00:13.880 [Pipeline] echo
00:00:13.882 Sorcerer 10.211.164.20 is alive
00:00:13.889 [Pipeline] retry
00:00:13.891 [Pipeline] {
00:00:13.902 [Pipeline] httpRequest
00:00:13.906 HttpMethod: GET
00:00:13.907 URL: http://10.211.164.20/packages/spdk_557f022f641abf567fb02704f67857eb8f6d9ff3.tar.gz
00:00:13.907 Sending request to url: http://10.211.164.20/packages/spdk_557f022f641abf567fb02704f67857eb8f6d9ff3.tar.gz
00:00:13.922 Response Code: HTTP/1.1 200 OK
00:00:13.923 Success: Status code 200 is in the accepted range: 200,404
00:00:13.923 Saving response body to /var/jenkins/workspace/nvmf-tcp-vg-autotest_2/spdk_557f022f641abf567fb02704f67857eb8f6d9ff3.tar.gz
00:02:56.814 [Pipeline] }
00:02:56.831 [Pipeline] // retry
00:02:56.839 [Pipeline] sh
00:02:57.124 + tar --no-same-owner -xf spdk_557f022f641abf567fb02704f67857eb8f6d9ff3.tar.gz
00:03:00.445 [Pipeline] sh
00:03:00.727 + git -C spdk log --oneline -n5
00:03:00.727 557f022f6 bdev: Change 1st parameter of bdev_bytes_to_blocks from bdev to desc
00:03:00.727 c0b2ac5c9 bdev: Change void to bdev_io pointer of parameter of _bdev_io_submit()
00:03:00.727 92fb22519 dif: dif_generate/verify_copy() supports NVMe PRACT = 1 and MD size > PI size
00:03:00.727 79daf868a dif: Add SPDK_DIF_FLAGS_NVME_PRACT for dif_generate/verify_copy()
00:03:00.727 431baf1b5 dif: Insert abstraction into dif_generate/verify_copy() for NVMe PRACT
00:03:00.746 [Pipeline] writeFile
00:03:00.761 [Pipeline] sh
00:03:01.047 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh
00:03:01.059 [Pipeline] sh
00:03:01.340 + cat autorun-spdk.conf
00:03:01.340 SPDK_RUN_FUNCTIONAL_TEST=1
00:03:01.340 SPDK_TEST_NVMF=1
00:03:01.340 SPDK_TEST_NVMF_TRANSPORT=tcp
00:03:01.340 SPDK_TEST_USDT=1
00:03:01.340 SPDK_TEST_NVMF_MDNS=1
00:03:01.340 SPDK_RUN_UBSAN=1
00:03:01.340 NET_TYPE=virt
00:03:01.340 SPDK_JSONRPC_GO_CLIENT=1
00:03:01.340 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:03:01.347 RUN_NIGHTLY=0
00:03:01.349 [Pipeline] }
00:03:01.363 [Pipeline] // stage
00:03:01.379 [Pipeline] stage
00:03:01.381 [Pipeline] { (Run VM)
00:03:01.393 [Pipeline] sh
00:03:01.677 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh
00:03:01.677 + echo 'Start stage prepare_nvme.sh'
00:03:01.677 Start stage prepare_nvme.sh
00:03:01.677 + [[ -n 1 ]]
00:03:01.677 + disk_prefix=ex1
00:03:01.677 + [[ -n /var/jenkins/workspace/nvmf-tcp-vg-autotest_2 ]]
00:03:01.677 + [[ -e /var/jenkins/workspace/nvmf-tcp-vg-autotest_2/autorun-spdk.conf ]]
00:03:01.677 + source /var/jenkins/workspace/nvmf-tcp-vg-autotest_2/autorun-spdk.conf
00:03:01.677 
++ SPDK_RUN_FUNCTIONAL_TEST=1 00:03:01.677 ++ SPDK_TEST_NVMF=1 00:03:01.677 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:03:01.677 ++ SPDK_TEST_USDT=1 00:03:01.677 ++ SPDK_TEST_NVMF_MDNS=1 00:03:01.677 ++ SPDK_RUN_UBSAN=1 00:03:01.677 ++ NET_TYPE=virt 00:03:01.677 ++ SPDK_JSONRPC_GO_CLIENT=1 00:03:01.677 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:03:01.677 ++ RUN_NIGHTLY=0 00:03:01.677 + cd /var/jenkins/workspace/nvmf-tcp-vg-autotest_2 00:03:01.677 + nvme_files=() 00:03:01.677 + declare -A nvme_files 00:03:01.677 + backend_dir=/var/lib/libvirt/images/backends 00:03:01.677 + nvme_files['nvme.img']=5G 00:03:01.677 + nvme_files['nvme-cmb.img']=5G 00:03:01.677 + nvme_files['nvme-multi0.img']=4G 00:03:01.677 + nvme_files['nvme-multi1.img']=4G 00:03:01.677 + nvme_files['nvme-multi2.img']=4G 00:03:01.677 + nvme_files['nvme-openstack.img']=8G 00:03:01.677 + nvme_files['nvme-zns.img']=5G 00:03:01.677 + (( SPDK_TEST_NVME_PMR == 1 )) 00:03:01.677 + (( SPDK_TEST_FTL == 1 )) 00:03:01.677 + (( SPDK_TEST_NVME_FDP == 1 )) 00:03:01.677 + [[ ! -d /var/lib/libvirt/images/backends ]] 00:03:01.677 + for nvme in "${!nvme_files[@]}" 00:03:01.677 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi2.img -s 4G 00:03:01.677 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:03:01.677 + for nvme in "${!nvme_files[@]}" 00:03:01.677 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-cmb.img -s 5G 00:03:01.677 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:03:01.677 + for nvme in "${!nvme_files[@]}" 00:03:01.677 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-openstack.img -s 8G 00:03:01.677 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:03:01.677 + for nvme in "${!nvme_files[@]}" 00:03:01.677 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-zns.img -s 5G 00:03:01.677 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:03:01.677 + for nvme in "${!nvme_files[@]}" 00:03:01.677 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi1.img -s 4G 00:03:01.677 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:03:01.677 + for nvme in "${!nvme_files[@]}" 00:03:01.677 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi0.img -s 4G 00:03:01.936 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:03:01.936 + for nvme in "${!nvme_files[@]}" 00:03:01.936 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme.img -s 5G 00:03:01.936 Formatting '/var/lib/libvirt/images/backends/ex1-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:03:01.936 ++ sudo grep -rl ex1-nvme.img /etc/libvirt/qemu 00:03:01.936 + echo 'End stage prepare_nvme.sh' 00:03:01.936 End stage prepare_nvme.sh 00:03:01.947 [Pipeline] sh 00:03:02.228 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:03:02.228 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt 
--qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex1-nvme.img -b /var/lib/libvirt/images/backends/ex1-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex1-nvme-multi1.img:/var/lib/libvirt/images/backends/ex1-nvme-multi2.img -H -a -v -f fedora39 00:03:02.228 00:03:02.228 DIR=/var/jenkins/workspace/nvmf-tcp-vg-autotest_2/spdk/scripts/vagrant 00:03:02.228 SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-vg-autotest_2/spdk 00:03:02.228 VAGRANT_TARGET=/var/jenkins/workspace/nvmf-tcp-vg-autotest_2 00:03:02.228 HELP=0 00:03:02.228 DRY_RUN=0 00:03:02.228 NVME_FILE=/var/lib/libvirt/images/backends/ex1-nvme.img,/var/lib/libvirt/images/backends/ex1-nvme-multi0.img, 00:03:02.228 NVME_DISKS_TYPE=nvme,nvme, 00:03:02.228 NVME_AUTO_CREATE=0 00:03:02.228 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex1-nvme-multi1.img:/var/lib/libvirt/images/backends/ex1-nvme-multi2.img, 00:03:02.228 NVME_CMB=,, 00:03:02.228 NVME_PMR=,, 00:03:02.228 NVME_ZNS=,, 00:03:02.228 NVME_MS=,, 00:03:02.228 NVME_FDP=,, 00:03:02.229 SPDK_VAGRANT_DISTRO=fedora39 00:03:02.229 SPDK_VAGRANT_VMCPU=10 00:03:02.229 SPDK_VAGRANT_VMRAM=12288 00:03:02.229 SPDK_VAGRANT_PROVIDER=libvirt 00:03:02.229 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:03:02.229 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:03:02.229 SPDK_OPENSTACK_NETWORK=0 00:03:02.229 VAGRANT_PACKAGE_BOX=0 00:03:02.229 VAGRANTFILE=/var/jenkins/workspace/nvmf-tcp-vg-autotest_2/spdk/scripts/vagrant/Vagrantfile 00:03:02.229 FORCE_DISTRO=true 00:03:02.229 VAGRANT_BOX_VERSION= 00:03:02.229 EXTRA_VAGRANTFILES= 00:03:02.229 NIC_MODEL=e1000 00:03:02.229 00:03:02.229 mkdir: created directory '/var/jenkins/workspace/nvmf-tcp-vg-autotest_2/fedora39-libvirt' 00:03:02.229 /var/jenkins/workspace/nvmf-tcp-vg-autotest_2/fedora39-libvirt /var/jenkins/workspace/nvmf-tcp-vg-autotest_2 00:03:06.435 Bringing machine 'default' up with 'libvirt' provider... 00:03:06.435 ==> default: Creating image (snapshot of base box volume). 00:03:06.694 ==> default: Creating domain with the following settings... 
00:03:06.694 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1732101351_3649e61751c2ed8d606f
00:03:06.694 ==> default: -- Domain type: kvm
00:03:06.694 ==> default: -- Cpus: 10
00:03:06.694 ==> default: -- Feature: acpi
00:03:06.694 ==> default: -- Feature: apic
00:03:06.694 ==> default: -- Feature: pae
00:03:06.694 ==> default: -- Memory: 12288M
00:03:06.694 ==> default: -- Memory Backing: hugepages:
00:03:06.694 ==> default: -- Management MAC:
00:03:06.694 ==> default: -- Loader:
00:03:06.694 ==> default: -- Nvram:
00:03:06.694 ==> default: -- Base box: spdk/fedora39
00:03:06.694 ==> default: -- Storage pool: default
00:03:06.694 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1732101351_3649e61751c2ed8d606f.img (20G)
00:03:06.694 ==> default: -- Volume Cache: default
00:03:06.694 ==> default: -- Kernel:
00:03:06.694 ==> default: -- Initrd:
00:03:06.694 ==> default: -- Graphics Type: vnc
00:03:06.694 ==> default: -- Graphics Port: -1
00:03:06.694 ==> default: -- Graphics IP: 127.0.0.1
00:03:06.694 ==> default: -- Graphics Password: Not defined
00:03:06.694 ==> default: -- Video Type: cirrus
00:03:06.694 ==> default: -- Video VRAM: 9216
00:03:06.694 ==> default: -- Sound Type:
00:03:06.694 ==> default: -- Keymap: en-us
00:03:06.694 ==> default: -- TPM Path:
00:03:06.694 ==> default: -- INPUT: type=mouse, bus=ps2
00:03:06.694 ==> default: -- Command line args:
00:03:06.694 ==> default: -> value=-device,
00:03:06.694 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10,
00:03:06.694 ==> default: -> value=-drive,
00:03:06.694 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme.img,if=none,id=nvme-0-drive0,
00:03:06.694 ==> default: -> value=-device,
00:03:06.694 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:03:06.694 ==> default: -> value=-device,
00:03:06.694 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11,
00:03:06.694 ==> default: -> value=-drive,
00:03:06.694 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi0.img,if=none,id=nvme-1-drive0,
00:03:06.694 ==> default: -> value=-device,
00:03:06.694 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:03:06.694 ==> default: -> value=-drive,
00:03:06.694 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi1.img,if=none,id=nvme-1-drive1,
00:03:06.694 ==> default: -> value=-device,
00:03:06.694 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:03:06.694 ==> default: -> value=-drive,
00:03:06.694 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi2.img,if=none,id=nvme-1-drive2,
00:03:06.694 ==> default: -> value=-device,
00:03:06.694 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:03:06.694 ==> default: Creating shared folders metadata...
==> default: Starting domain.
00:03:08.072 ==> default: Waiting for domain to get an IP address...
00:03:26.158 ==> default: Waiting for SSH to become available...
00:03:26.158 ==> default: Configuring and enabling network interfaces...
00:03:29.440 default: SSH address: 192.168.121.165:22
00:03:29.440 default: SSH username: vagrant
00:03:29.440 default: SSH auth method: private key
00:03:31.338 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest_2/spdk/ => /home/vagrant/spdk_repo/spdk
00:03:39.444 ==> default: Mounting SSHFS shared folder...
00:03:40.011 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest_2/fedora39-libvirt/output => /home/vagrant/spdk_repo/output
00:03:40.270 ==> default: Checking Mount..
00:03:41.648 ==> default: Folder Successfully Mounted!
00:03:41.648 ==> default: Running provisioner: file...
00:03:42.214 default: ~/.gitconfig => .gitconfig
00:03:42.473 
00:03:42.473 SUCCESS!
00:03:42.473 
00:03:42.473 cd to /var/jenkins/workspace/nvmf-tcp-vg-autotest_2/fedora39-libvirt and type "vagrant ssh" to use.
00:03:42.473 Use vagrant "suspend" and vagrant "resume" to stop and start.
00:03:42.473 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvmf-tcp-vg-autotest_2/fedora39-libvirt" to destroy all trace of vm.
00:03:42.473 
00:03:42.482 [Pipeline] }
00:03:42.497 [Pipeline] // stage
00:03:42.506 [Pipeline] dir
00:03:42.506 Running in /var/jenkins/workspace/nvmf-tcp-vg-autotest_2/fedora39-libvirt
00:03:42.508 [Pipeline] {
00:03:42.520 [Pipeline] catchError
00:03:42.522 [Pipeline] {
00:03:42.534 [Pipeline] sh
00:03:42.812 + vagrant ssh-config --host vagrant
00:03:42.812 + sed -ne /^Host/,$p
00:03:42.812 + tee ssh_conf
00:03:47.010 Host vagrant
00:03:47.010 HostName 192.168.121.165
00:03:47.010 User vagrant
00:03:47.010 Port 22
00:03:47.010 UserKnownHostsFile /dev/null
00:03:47.010 StrictHostKeyChecking no
00:03:47.010 PasswordAuthentication no
00:03:47.010 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39
00:03:47.010 IdentitiesOnly yes
00:03:47.010 LogLevel FATAL
00:03:47.010 ForwardAgent yes
00:03:47.010 ForwardX11 yes
00:03:47.010 
00:03:47.025 [Pipeline] withEnv
00:03:47.028 [Pipeline] {
00:03:47.045 [Pipeline] sh
00:03:47.328 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash
00:03:47.328 source /etc/os-release
00:03:47.328 [[ -e /image.version ]] && img=$(< /image.version)
00:03:47.328 # Minimal, systemd-like check.
00:03:47.328 if [[ -e /.dockerenv ]]; then
00:03:47.328 # Clear garbage from the node's name:
00:03:47.328 # agt-er_autotest_547-896 -> autotest_547-896
00:03:47.328 # $HOSTNAME is the actual container id
00:03:47.328 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_}
00:03:47.328 if grep -q "/etc/hostname" /proc/self/mountinfo; then
00:03:47.328 # We can assume this is a mount from a host where container is running,
00:03:47.328 # so fetch its hostname to easily identify the target swarm worker.
00:03:47.328 container="$(< /etc/hostname) ($agent)"
00:03:47.328 else
00:03:47.328 # Fallback
00:03:47.328 container=$agent
00:03:47.328 fi
00:03:47.328 fi
00:03:47.328 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}"
00:03:47.328 
00:03:47.596 [Pipeline] }
00:03:47.614 [Pipeline] // withEnv
00:03:47.626 [Pipeline] setCustomBuildProperty
00:03:47.644 [Pipeline] stage
00:03:47.648 [Pipeline] { (Tests)
00:03:47.667 [Pipeline] sh
00:03:48.016 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-vg-autotest_2/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./
00:03:48.030 [Pipeline] sh
00:03:48.309 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-vg-autotest_2/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./
00:03:48.581 [Pipeline] timeout
00:03:48.581 Timeout set to expire in 1 hr 0 min
00:03:48.583 [Pipeline] {
00:03:48.597 [Pipeline] sh
00:03:48.876 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard
00:03:49.441 HEAD is now at 557f022f6 bdev: Change 1st parameter of bdev_bytes_to_blocks from bdev to desc
00:03:49.452 [Pipeline] sh
00:03:49.730 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo
00:03:49.999 [Pipeline] sh
00:03:50.275 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-vg-autotest_2/autorun-spdk.conf vagrant@vagrant:spdk_repo
00:03:50.546 [Pipeline] sh
00:03:50.824 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvmf-tcp-vg-autotest ./autoruner.sh spdk_repo
00:03:51.084 ++ readlink -f spdk_repo
00:03:51.084 + DIR_ROOT=/home/vagrant/spdk_repo
00:03:51.084 + [[ -n /home/vagrant/spdk_repo ]]
00:03:51.084 + DIR_SPDK=/home/vagrant/spdk_repo/spdk
00:03:51.084 + DIR_OUTPUT=/home/vagrant/spdk_repo/output
00:03:51.084 + [[ -d /home/vagrant/spdk_repo/spdk ]]
00:03:51.084 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:03:51.084 + [[ -d /home/vagrant/spdk_repo/output ]] 00:03:51.084 + [[ nvmf-tcp-vg-autotest == pkgdep-* ]] 00:03:51.084 + cd /home/vagrant/spdk_repo 00:03:51.084 + source /etc/os-release 00:03:51.084 ++ NAME='Fedora Linux' 00:03:51.084 ++ VERSION='39 (Cloud Edition)' 00:03:51.084 ++ ID=fedora 00:03:51.084 ++ VERSION_ID=39 00:03:51.084 ++ VERSION_CODENAME= 00:03:51.084 ++ PLATFORM_ID=platform:f39 00:03:51.084 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:03:51.084 ++ ANSI_COLOR='0;38;2;60;110;180' 00:03:51.084 ++ LOGO=fedora-logo-icon 00:03:51.084 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:03:51.084 ++ HOME_URL=https://fedoraproject.org/ 00:03:51.084 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:03:51.084 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:03:51.084 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:03:51.084 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:03:51.084 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:03:51.084 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:03:51.084 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:03:51.084 ++ SUPPORT_END=2024-11-12 00:03:51.084 ++ VARIANT='Cloud Edition' 00:03:51.084 ++ VARIANT_ID=cloud 00:03:51.084 + uname -a 00:03:51.084 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:03:51.084 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:03:51.343 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:51.343 Hugepages 00:03:51.343 node hugesize free / total 00:03:51.601 node0 1048576kB 0 / 0 00:03:51.601 node0 2048kB 0 / 0 00:03:51.601 00:03:51.601 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:51.601 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:03:51.601 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:03:51.601 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:03:51.601 + rm -f /tmp/spdk-ld-path 00:03:51.601 + source autorun-spdk.conf 00:03:51.601 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:03:51.601 ++ SPDK_TEST_NVMF=1 00:03:51.601 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:03:51.601 ++ SPDK_TEST_USDT=1 00:03:51.601 ++ SPDK_TEST_NVMF_MDNS=1 00:03:51.601 ++ SPDK_RUN_UBSAN=1 00:03:51.601 ++ NET_TYPE=virt 00:03:51.601 ++ SPDK_JSONRPC_GO_CLIENT=1 00:03:51.601 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:03:51.601 ++ RUN_NIGHTLY=0 00:03:51.601 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:03:51.601 + [[ -n '' ]] 00:03:51.601 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:03:51.601 + for M in /var/spdk/build-*-manifest.txt 00:03:51.601 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:03:51.601 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:03:51.601 + for M in /var/spdk/build-*-manifest.txt 00:03:51.601 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:03:51.601 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:03:51.601 + for M in /var/spdk/build-*-manifest.txt 00:03:51.601 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:03:51.601 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:03:51.601 ++ uname 00:03:51.601 + [[ Linux == \L\i\n\u\x ]] 00:03:51.601 + sudo dmesg -T 00:03:51.601 + sudo dmesg --clear 00:03:51.601 + dmesg_pid=5259 00:03:51.601 + sudo dmesg -Tw 00:03:51.601 + [[ Fedora Linux == FreeBSD ]] 00:03:51.601 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 
00:03:51.601 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:03:51.601 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:03:51.601 + [[ -x /usr/src/fio-static/fio ]] 00:03:51.601 + export FIO_BIN=/usr/src/fio-static/fio 00:03:51.601 + FIO_BIN=/usr/src/fio-static/fio 00:03:51.601 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:03:51.601 + [[ ! -v VFIO_QEMU_BIN ]] 00:03:51.601 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:03:51.601 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:03:51.601 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:03:51.601 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:03:51.601 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:03:51.601 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:03:51.601 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:03:51.860 11:16:37 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:03:51.860 11:16:37 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf 00:03:51.860 11:16:37 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:03:51.860 11:16:37 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1 00:03:51.860 11:16:37 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_TEST_NVMF_TRANSPORT=tcp 00:03:51.860 11:16:37 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_USDT=1 00:03:51.860 11:16:37 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_TEST_NVMF_MDNS=1 00:03:51.860 11:16:37 -- spdk_repo/autorun-spdk.conf@6 -- $ SPDK_RUN_UBSAN=1 00:03:51.860 11:16:37 -- spdk_repo/autorun-spdk.conf@7 -- $ NET_TYPE=virt 00:03:51.860 11:16:37 -- spdk_repo/autorun-spdk.conf@8 -- $ SPDK_JSONRPC_GO_CLIENT=1 00:03:51.860 11:16:37 -- spdk_repo/autorun-spdk.conf@9 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:03:51.860 11:16:37 -- spdk_repo/autorun-spdk.conf@10 -- $ RUN_NIGHTLY=0 00:03:51.860 11:16:37 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:03:51.860 11:16:37 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:03:51.860 11:16:37 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:03:51.860 11:16:37 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:03:51.860 11:16:37 -- scripts/common.sh@15 -- $ shopt -s extglob 00:03:51.860 11:16:37 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:03:51.860 11:16:37 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:51.860 11:16:37 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:51.861 11:16:37 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:51.861 11:16:37 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:51.861 11:16:37 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:51.861 11:16:37 -- paths/export.sh@5 -- $ export PATH 00:03:51.861 11:16:37 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:51.861 11:16:37 -- common/autobuild_common.sh@492 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:03:51.861 11:16:37 -- common/autobuild_common.sh@493 -- $ date +%s 00:03:51.861 11:16:37 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1732101397.XXXXXX 00:03:51.861 11:16:37 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1732101397.m0xOev 00:03:51.861 11:16:37 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]] 00:03:51.861 11:16:37 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']' 00:03:51.861 11:16:37 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:03:51.861 11:16:37 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:03:51.861 11:16:37 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:03:51.861 11:16:37 -- common/autobuild_common.sh@509 -- $ get_config_params 00:03:51.861 11:16:37 -- common/autotest_common.sh@409 -- $ xtrace_disable 00:03:51.861 11:16:37 -- common/autotest_common.sh@10 -- $ set +x 00:03:51.861 11:16:37 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-avahi --with-golang' 00:03:51.861 11:16:37 -- common/autobuild_common.sh@511 -- $ start_monitor_resources 00:03:51.861 11:16:37 -- pm/common@17 -- $ local monitor 00:03:51.861 11:16:37 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:51.861 11:16:37 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:51.861 11:16:37 -- pm/common@25 -- $ sleep 1 00:03:51.861 11:16:37 -- pm/common@21 -- $ date +%s 00:03:51.861 11:16:37 -- pm/common@21 -- $ date +%s 00:03:51.861 11:16:37 -- pm/common@21 -- $ 
/home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732101397 00:03:51.861 11:16:37 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732101397 00:03:51.861 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732101397_collect-vmstat.pm.log 00:03:51.861 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732101397_collect-cpu-load.pm.log 00:03:52.796 11:16:38 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT 00:03:52.796 11:16:38 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:03:52.796 11:16:38 -- spdk/autobuild.sh@12 -- $ umask 022 00:03:52.796 11:16:38 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:03:52.796 11:16:38 -- spdk/autobuild.sh@16 -- $ date -u 00:03:52.796 Wed Nov 20 11:16:38 AM UTC 2024 00:03:52.796 11:16:38 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:03:52.796 v25.01-pre-219-g557f022f6 00:03:52.796 11:16:38 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:03:52.796 11:16:38 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:03:52.796 11:16:38 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:03:52.796 11:16:38 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:03:52.796 11:16:38 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:03:52.796 11:16:38 -- common/autotest_common.sh@10 -- $ set +x 00:03:52.796 ************************************ 00:03:52.796 START TEST ubsan 00:03:52.796 ************************************ 00:03:52.796 using ubsan 00:03:52.796 11:16:38 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan' 00:03:52.796 00:03:52.796 real 0m0.000s 00:03:52.796 user 0m0.000s 00:03:52.796 sys 0m0.000s 00:03:52.796 11:16:38 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:03:52.796 11:16:38 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:03:52.796 ************************************ 00:03:52.796 END TEST ubsan 00:03:52.796 ************************************ 00:03:53.055 11:16:38 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:03:53.055 11:16:38 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:03:53.055 11:16:38 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:03:53.055 11:16:38 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:03:53.055 11:16:38 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:03:53.055 11:16:38 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:03:53.055 11:16:38 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:03:53.055 11:16:38 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:03:53.055 11:16:38 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-avahi --with-golang --with-shared 00:03:53.055 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:03:53.055 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:03:53.624 Using 'verbs' RDMA provider 00:04:06.769 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:04:21.643 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:04:21.643 go version go1.21.1 linux/amd64 00:04:21.643 Creating mk/config.mk...done. 00:04:21.643 Creating mk/cc.flags.mk...done. 
00:04:21.643 Type 'make' to build. 00:04:21.643 11:17:05 -- spdk/autobuild.sh@70 -- $ run_test make make -j10 00:04:21.643 11:17:05 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:04:21.643 11:17:05 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:04:21.643 11:17:05 -- common/autotest_common.sh@10 -- $ set +x 00:04:21.643 ************************************ 00:04:21.643 START TEST make 00:04:21.643 ************************************ 00:04:21.643 11:17:05 make -- common/autotest_common.sh@1129 -- $ make -j10 00:04:21.643 make[1]: Nothing to be done for 'all'. 00:04:36.519 The Meson build system 00:04:36.519 Version: 1.5.0 00:04:36.519 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:04:36.519 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:04:36.519 Build type: native build 00:04:36.519 Program cat found: YES (/usr/bin/cat) 00:04:36.519 Project name: DPDK 00:04:36.519 Project version: 24.03.0 00:04:36.519 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:04:36.519 C linker for the host machine: cc ld.bfd 2.40-14 00:04:36.519 Host machine cpu family: x86_64 00:04:36.519 Host machine cpu: x86_64 00:04:36.519 Message: ## Building in Developer Mode ## 00:04:36.519 Program pkg-config found: YES (/usr/bin/pkg-config) 00:04:36.519 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:04:36.519 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:04:36.519 Program python3 found: YES (/usr/bin/python3) 00:04:36.519 Program cat found: YES (/usr/bin/cat) 00:04:36.519 Compiler for C supports arguments -march=native: YES 00:04:36.519 Checking for size of "void *" : 8 00:04:36.519 Checking for size of "void *" : 8 (cached) 00:04:36.519 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:04:36.519 Library m found: YES 00:04:36.519 Library numa found: YES 00:04:36.519 Has header "numaif.h" : YES 00:04:36.519 Library fdt found: NO 00:04:36.519 Library execinfo found: NO 00:04:36.519 Has header "execinfo.h" : YES 00:04:36.519 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:04:36.519 Run-time dependency libarchive found: NO (tried pkgconfig) 00:04:36.519 Run-time dependency libbsd found: NO (tried pkgconfig) 00:04:36.519 Run-time dependency jansson found: NO (tried pkgconfig) 00:04:36.519 Run-time dependency openssl found: YES 3.1.1 00:04:36.519 Run-time dependency libpcap found: YES 1.10.4 00:04:36.519 Has header "pcap.h" with dependency libpcap: YES 00:04:36.519 Compiler for C supports arguments -Wcast-qual: YES 00:04:36.519 Compiler for C supports arguments -Wdeprecated: YES 00:04:36.519 Compiler for C supports arguments -Wformat: YES 00:04:36.519 Compiler for C supports arguments -Wformat-nonliteral: NO 00:04:36.519 Compiler for C supports arguments -Wformat-security: NO 00:04:36.519 Compiler for C supports arguments -Wmissing-declarations: YES 00:04:36.519 Compiler for C supports arguments -Wmissing-prototypes: YES 00:04:36.519 Compiler for C supports arguments -Wnested-externs: YES 00:04:36.519 Compiler for C supports arguments -Wold-style-definition: YES 00:04:36.519 Compiler for C supports arguments -Wpointer-arith: YES 00:04:36.519 Compiler for C supports arguments -Wsign-compare: YES 00:04:36.519 Compiler for C supports arguments -Wstrict-prototypes: YES 00:04:36.519 Compiler for C supports arguments -Wundef: YES 00:04:36.519 Compiler for C supports arguments -Wwrite-strings: YES 
00:04:36.519 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:04:36.519 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:04:36.519 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:04:36.519 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:04:36.519 Program objdump found: YES (/usr/bin/objdump) 00:04:36.519 Compiler for C supports arguments -mavx512f: YES 00:04:36.519 Checking if "AVX512 checking" compiles: YES 00:04:36.519 Fetching value of define "__SSE4_2__" : 1 00:04:36.519 Fetching value of define "__AES__" : 1 00:04:36.519 Fetching value of define "__AVX__" : 1 00:04:36.519 Fetching value of define "__AVX2__" : 1 00:04:36.519 Fetching value of define "__AVX512BW__" : (undefined) 00:04:36.519 Fetching value of define "__AVX512CD__" : (undefined) 00:04:36.519 Fetching value of define "__AVX512DQ__" : (undefined) 00:04:36.519 Fetching value of define "__AVX512F__" : (undefined) 00:04:36.519 Fetching value of define "__AVX512VL__" : (undefined) 00:04:36.519 Fetching value of define "__PCLMUL__" : 1 00:04:36.519 Fetching value of define "__RDRND__" : 1 00:04:36.519 Fetching value of define "__RDSEED__" : 1 00:04:36.519 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:04:36.519 Fetching value of define "__znver1__" : (undefined) 00:04:36.519 Fetching value of define "__znver2__" : (undefined) 00:04:36.519 Fetching value of define "__znver3__" : (undefined) 00:04:36.519 Fetching value of define "__znver4__" : (undefined) 00:04:36.519 Compiler for C supports arguments -Wno-format-truncation: YES 00:04:36.519 Message: lib/log: Defining dependency "log" 00:04:36.519 Message: lib/kvargs: Defining dependency "kvargs" 00:04:36.519 Message: lib/telemetry: Defining dependency "telemetry" 00:04:36.519 Checking for function "getentropy" : NO 00:04:36.519 Message: lib/eal: Defining dependency "eal" 00:04:36.519 Message: lib/ring: Defining dependency "ring" 00:04:36.519 Message: lib/rcu: Defining dependency "rcu" 00:04:36.519 Message: lib/mempool: Defining dependency "mempool" 00:04:36.519 Message: lib/mbuf: Defining dependency "mbuf" 00:04:36.519 Fetching value of define "__PCLMUL__" : 1 (cached) 00:04:36.519 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:04:36.519 Compiler for C supports arguments -mpclmul: YES 00:04:36.519 Compiler for C supports arguments -maes: YES 00:04:36.519 Compiler for C supports arguments -mavx512f: YES (cached) 00:04:36.519 Compiler for C supports arguments -mavx512bw: YES 00:04:36.519 Compiler for C supports arguments -mavx512dq: YES 00:04:36.519 Compiler for C supports arguments -mavx512vl: YES 00:04:36.519 Compiler for C supports arguments -mvpclmulqdq: YES 00:04:36.519 Compiler for C supports arguments -mavx2: YES 00:04:36.519 Compiler for C supports arguments -mavx: YES 00:04:36.519 Message: lib/net: Defining dependency "net" 00:04:36.519 Message: lib/meter: Defining dependency "meter" 00:04:36.519 Message: lib/ethdev: Defining dependency "ethdev" 00:04:36.519 Message: lib/pci: Defining dependency "pci" 00:04:36.520 Message: lib/cmdline: Defining dependency "cmdline" 00:04:36.520 Message: lib/hash: Defining dependency "hash" 00:04:36.520 Message: lib/timer: Defining dependency "timer" 00:04:36.520 Message: lib/compressdev: Defining dependency "compressdev" 00:04:36.520 Message: lib/cryptodev: Defining dependency "cryptodev" 00:04:36.520 Message: lib/dmadev: Defining dependency "dmadev" 00:04:36.520 Compiler for C supports arguments -Wno-cast-qual: YES 
00:04:36.520 Message: lib/power: Defining dependency "power" 00:04:36.520 Message: lib/reorder: Defining dependency "reorder" 00:04:36.520 Message: lib/security: Defining dependency "security" 00:04:36.520 Has header "linux/userfaultfd.h" : YES 00:04:36.520 Has header "linux/vduse.h" : YES 00:04:36.520 Message: lib/vhost: Defining dependency "vhost" 00:04:36.520 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:04:36.520 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:04:36.520 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:04:36.520 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:04:36.520 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:04:36.520 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:04:36.520 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:04:36.520 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:04:36.520 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:04:36.520 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:04:36.520 Program doxygen found: YES (/usr/local/bin/doxygen) 00:04:36.520 Configuring doxy-api-html.conf using configuration 00:04:36.520 Configuring doxy-api-man.conf using configuration 00:04:36.520 Program mandb found: YES (/usr/bin/mandb) 00:04:36.520 Program sphinx-build found: NO 00:04:36.520 Configuring rte_build_config.h using configuration 00:04:36.520 Message: 00:04:36.520 ================= 00:04:36.520 Applications Enabled 00:04:36.520 ================= 00:04:36.520 00:04:36.520 apps: 00:04:36.520 00:04:36.520 00:04:36.520 Message: 00:04:36.520 ================= 00:04:36.520 Libraries Enabled 00:04:36.520 ================= 00:04:36.520 00:04:36.520 libs: 00:04:36.520 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:04:36.520 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:04:36.520 cryptodev, dmadev, power, reorder, security, vhost, 00:04:36.520 00:04:36.520 Message: 00:04:36.520 =============== 00:04:36.520 Drivers Enabled 00:04:36.520 =============== 00:04:36.520 00:04:36.520 common: 00:04:36.520 00:04:36.520 bus: 00:04:36.520 pci, vdev, 00:04:36.520 mempool: 00:04:36.520 ring, 00:04:36.520 dma: 00:04:36.520 00:04:36.520 net: 00:04:36.520 00:04:36.520 crypto: 00:04:36.520 00:04:36.520 compress: 00:04:36.520 00:04:36.520 vdpa: 00:04:36.520 00:04:36.520 00:04:36.520 Message: 00:04:36.520 ================= 00:04:36.520 Content Skipped 00:04:36.520 ================= 00:04:36.520 00:04:36.520 apps: 00:04:36.520 dumpcap: explicitly disabled via build config 00:04:36.520 graph: explicitly disabled via build config 00:04:36.520 pdump: explicitly disabled via build config 00:04:36.520 proc-info: explicitly disabled via build config 00:04:36.520 test-acl: explicitly disabled via build config 00:04:36.520 test-bbdev: explicitly disabled via build config 00:04:36.520 test-cmdline: explicitly disabled via build config 00:04:36.520 test-compress-perf: explicitly disabled via build config 00:04:36.520 test-crypto-perf: explicitly disabled via build config 00:04:36.520 test-dma-perf: explicitly disabled via build config 00:04:36.520 test-eventdev: explicitly disabled via build config 00:04:36.520 test-fib: explicitly disabled via build config 00:04:36.520 test-flow-perf: explicitly disabled via build config 00:04:36.520 test-gpudev: explicitly disabled via build config 00:04:36.520 test-mldev: explicitly disabled via 
build config 00:04:36.520 test-pipeline: explicitly disabled via build config 00:04:36.520 test-pmd: explicitly disabled via build config 00:04:36.520 test-regex: explicitly disabled via build config 00:04:36.520 test-sad: explicitly disabled via build config 00:04:36.520 test-security-perf: explicitly disabled via build config 00:04:36.520 00:04:36.520 libs: 00:04:36.520 argparse: explicitly disabled via build config 00:04:36.520 metrics: explicitly disabled via build config 00:04:36.520 acl: explicitly disabled via build config 00:04:36.520 bbdev: explicitly disabled via build config 00:04:36.520 bitratestats: explicitly disabled via build config 00:04:36.520 bpf: explicitly disabled via build config 00:04:36.520 cfgfile: explicitly disabled via build config 00:04:36.520 distributor: explicitly disabled via build config 00:04:36.520 efd: explicitly disabled via build config 00:04:36.520 eventdev: explicitly disabled via build config 00:04:36.520 dispatcher: explicitly disabled via build config 00:04:36.520 gpudev: explicitly disabled via build config 00:04:36.520 gro: explicitly disabled via build config 00:04:36.520 gso: explicitly disabled via build config 00:04:36.520 ip_frag: explicitly disabled via build config 00:04:36.520 jobstats: explicitly disabled via build config 00:04:36.520 latencystats: explicitly disabled via build config 00:04:36.520 lpm: explicitly disabled via build config 00:04:36.520 member: explicitly disabled via build config 00:04:36.520 pcapng: explicitly disabled via build config 00:04:36.520 rawdev: explicitly disabled via build config 00:04:36.520 regexdev: explicitly disabled via build config 00:04:36.520 mldev: explicitly disabled via build config 00:04:36.520 rib: explicitly disabled via build config 00:04:36.520 sched: explicitly disabled via build config 00:04:36.520 stack: explicitly disabled via build config 00:04:36.520 ipsec: explicitly disabled via build config 00:04:36.520 pdcp: explicitly disabled via build config 00:04:36.520 fib: explicitly disabled via build config 00:04:36.520 port: explicitly disabled via build config 00:04:36.520 pdump: explicitly disabled via build config 00:04:36.520 table: explicitly disabled via build config 00:04:36.520 pipeline: explicitly disabled via build config 00:04:36.520 graph: explicitly disabled via build config 00:04:36.520 node: explicitly disabled via build config 00:04:36.520 00:04:36.520 drivers: 00:04:36.520 common/cpt: not in enabled drivers build config 00:04:36.520 common/dpaax: not in enabled drivers build config 00:04:36.520 common/iavf: not in enabled drivers build config 00:04:36.520 common/idpf: not in enabled drivers build config 00:04:36.520 common/ionic: not in enabled drivers build config 00:04:36.520 common/mvep: not in enabled drivers build config 00:04:36.520 common/octeontx: not in enabled drivers build config 00:04:36.520 bus/auxiliary: not in enabled drivers build config 00:04:36.520 bus/cdx: not in enabled drivers build config 00:04:36.520 bus/dpaa: not in enabled drivers build config 00:04:36.520 bus/fslmc: not in enabled drivers build config 00:04:36.520 bus/ifpga: not in enabled drivers build config 00:04:36.520 bus/platform: not in enabled drivers build config 00:04:36.520 bus/uacce: not in enabled drivers build config 00:04:36.520 bus/vmbus: not in enabled drivers build config 00:04:36.520 common/cnxk: not in enabled drivers build config 00:04:36.520 common/mlx5: not in enabled drivers build config 00:04:36.520 common/nfp: not in enabled drivers build config 00:04:36.520 
common/nitrox: not in enabled drivers build config 00:04:36.520 common/qat: not in enabled drivers build config 00:04:36.520 common/sfc_efx: not in enabled drivers build config 00:04:36.520 mempool/bucket: not in enabled drivers build config 00:04:36.520 mempool/cnxk: not in enabled drivers build config 00:04:36.520 mempool/dpaa: not in enabled drivers build config 00:04:36.520 mempool/dpaa2: not in enabled drivers build config 00:04:36.520 mempool/octeontx: not in enabled drivers build config 00:04:36.520 mempool/stack: not in enabled drivers build config 00:04:36.520 dma/cnxk: not in enabled drivers build config 00:04:36.520 dma/dpaa: not in enabled drivers build config 00:04:36.520 dma/dpaa2: not in enabled drivers build config 00:04:36.520 dma/hisilicon: not in enabled drivers build config 00:04:36.520 dma/idxd: not in enabled drivers build config 00:04:36.520 dma/ioat: not in enabled drivers build config 00:04:36.520 dma/skeleton: not in enabled drivers build config 00:04:36.520 net/af_packet: not in enabled drivers build config 00:04:36.520 net/af_xdp: not in enabled drivers build config 00:04:36.520 net/ark: not in enabled drivers build config 00:04:36.520 net/atlantic: not in enabled drivers build config 00:04:36.520 net/avp: not in enabled drivers build config 00:04:36.520 net/axgbe: not in enabled drivers build config 00:04:36.520 net/bnx2x: not in enabled drivers build config 00:04:36.520 net/bnxt: not in enabled drivers build config 00:04:36.520 net/bonding: not in enabled drivers build config 00:04:36.520 net/cnxk: not in enabled drivers build config 00:04:36.520 net/cpfl: not in enabled drivers build config 00:04:36.520 net/cxgbe: not in enabled drivers build config 00:04:36.520 net/dpaa: not in enabled drivers build config 00:04:36.520 net/dpaa2: not in enabled drivers build config 00:04:36.520 net/e1000: not in enabled drivers build config 00:04:36.520 net/ena: not in enabled drivers build config 00:04:36.520 net/enetc: not in enabled drivers build config 00:04:36.520 net/enetfec: not in enabled drivers build config 00:04:36.520 net/enic: not in enabled drivers build config 00:04:36.520 net/failsafe: not in enabled drivers build config 00:04:36.520 net/fm10k: not in enabled drivers build config 00:04:36.520 net/gve: not in enabled drivers build config 00:04:36.520 net/hinic: not in enabled drivers build config 00:04:36.520 net/hns3: not in enabled drivers build config 00:04:36.520 net/i40e: not in enabled drivers build config 00:04:36.520 net/iavf: not in enabled drivers build config 00:04:36.520 net/ice: not in enabled drivers build config 00:04:36.520 net/idpf: not in enabled drivers build config 00:04:36.520 net/igc: not in enabled drivers build config 00:04:36.520 net/ionic: not in enabled drivers build config 00:04:36.520 net/ipn3ke: not in enabled drivers build config 00:04:36.520 net/ixgbe: not in enabled drivers build config 00:04:36.520 net/mana: not in enabled drivers build config 00:04:36.521 net/memif: not in enabled drivers build config 00:04:36.521 net/mlx4: not in enabled drivers build config 00:04:36.521 net/mlx5: not in enabled drivers build config 00:04:36.521 net/mvneta: not in enabled drivers build config 00:04:36.521 net/mvpp2: not in enabled drivers build config 00:04:36.521 net/netvsc: not in enabled drivers build config 00:04:36.521 net/nfb: not in enabled drivers build config 00:04:36.521 net/nfp: not in enabled drivers build config 00:04:36.521 net/ngbe: not in enabled drivers build config 00:04:36.521 net/null: not in enabled drivers build config 
00:04:36.521 net/octeontx: not in enabled drivers build config 00:04:36.521 net/octeon_ep: not in enabled drivers build config 00:04:36.521 net/pcap: not in enabled drivers build config 00:04:36.521 net/pfe: not in enabled drivers build config 00:04:36.521 net/qede: not in enabled drivers build config 00:04:36.521 net/ring: not in enabled drivers build config 00:04:36.521 net/sfc: not in enabled drivers build config 00:04:36.521 net/softnic: not in enabled drivers build config 00:04:36.521 net/tap: not in enabled drivers build config 00:04:36.521 net/thunderx: not in enabled drivers build config 00:04:36.521 net/txgbe: not in enabled drivers build config 00:04:36.521 net/vdev_netvsc: not in enabled drivers build config 00:04:36.521 net/vhost: not in enabled drivers build config 00:04:36.521 net/virtio: not in enabled drivers build config 00:04:36.521 net/vmxnet3: not in enabled drivers build config 00:04:36.521 raw/*: missing internal dependency, "rawdev" 00:04:36.521 crypto/armv8: not in enabled drivers build config 00:04:36.521 crypto/bcmfs: not in enabled drivers build config 00:04:36.521 crypto/caam_jr: not in enabled drivers build config 00:04:36.521 crypto/ccp: not in enabled drivers build config 00:04:36.521 crypto/cnxk: not in enabled drivers build config 00:04:36.521 crypto/dpaa_sec: not in enabled drivers build config 00:04:36.521 crypto/dpaa2_sec: not in enabled drivers build config 00:04:36.521 crypto/ipsec_mb: not in enabled drivers build config 00:04:36.521 crypto/mlx5: not in enabled drivers build config 00:04:36.521 crypto/mvsam: not in enabled drivers build config 00:04:36.521 crypto/nitrox: not in enabled drivers build config 00:04:36.521 crypto/null: not in enabled drivers build config 00:04:36.521 crypto/octeontx: not in enabled drivers build config 00:04:36.521 crypto/openssl: not in enabled drivers build config 00:04:36.521 crypto/scheduler: not in enabled drivers build config 00:04:36.521 crypto/uadk: not in enabled drivers build config 00:04:36.521 crypto/virtio: not in enabled drivers build config 00:04:36.521 compress/isal: not in enabled drivers build config 00:04:36.521 compress/mlx5: not in enabled drivers build config 00:04:36.521 compress/nitrox: not in enabled drivers build config 00:04:36.521 compress/octeontx: not in enabled drivers build config 00:04:36.521 compress/zlib: not in enabled drivers build config 00:04:36.521 regex/*: missing internal dependency, "regexdev" 00:04:36.521 ml/*: missing internal dependency, "mldev" 00:04:36.521 vdpa/ifc: not in enabled drivers build config 00:04:36.521 vdpa/mlx5: not in enabled drivers build config 00:04:36.521 vdpa/nfp: not in enabled drivers build config 00:04:36.521 vdpa/sfc: not in enabled drivers build config 00:04:36.521 event/*: missing internal dependency, "eventdev" 00:04:36.521 baseband/*: missing internal dependency, "bbdev" 00:04:36.521 gpu/*: missing internal dependency, "gpudev" 00:04:36.521 00:04:36.521 00:04:36.521 Build targets in project: 85 00:04:36.521 00:04:36.521 DPDK 24.03.0 00:04:36.521 00:04:36.521 User defined options 00:04:36.521 buildtype : debug 00:04:36.521 default_library : shared 00:04:36.521 libdir : lib 00:04:36.521 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:04:36.521 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:04:36.521 c_link_args : 00:04:36.521 cpu_instruction_set: native 00:04:36.521 disable_apps : 
dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:04:36.521 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:04:36.521 enable_docs : false 00:04:36.521 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm 00:04:36.521 enable_kmods : false 00:04:36.521 max_lcores : 128 00:04:36.521 tests : false 00:04:36.521 00:04:36.521 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:04:36.521 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:04:36.521 [1/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:04:36.521 [2/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:04:36.521 [3/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:04:36.521 [4/268] Linking static target lib/librte_kvargs.a 00:04:36.521 [5/268] Linking static target lib/librte_log.a 00:04:36.521 [6/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:04:36.779 [7/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:04:36.779 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:04:36.779 [9/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:04:37.037 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:04:37.037 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:04:37.037 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:04:37.037 [13/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:04:37.037 [14/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:04:37.037 [15/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:04:37.037 [16/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:04:37.037 [17/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:04:37.037 [18/268] Linking static target lib/librte_telemetry.a 00:04:37.295 [19/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:04:37.295 [20/268] Linking target lib/librte_log.so.24.1 00:04:37.552 [21/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:04:37.552 [22/268] Linking target lib/librte_kvargs.so.24.1 00:04:37.810 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:04:37.810 [24/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:04:37.810 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:04:38.068 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:04:38.068 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:04:38.068 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:04:38.068 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:04:38.068 [30/268] Generating 
lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:04:38.068 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:04:38.068 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:04:38.068 [33/268] Linking target lib/librte_telemetry.so.24.1 00:04:38.327 [34/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:04:38.327 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:04:38.327 [36/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:04:38.584 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:04:38.842 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:04:38.842 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:04:38.842 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:04:39.101 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:04:39.101 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:04:39.101 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:04:39.101 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:04:39.101 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:04:39.101 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:04:39.101 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:04:39.359 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:04:39.359 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:04:39.359 [50/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:04:39.617 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:04:39.875 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:04:39.875 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:04:39.875 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:04:39.875 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:04:40.133 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:04:40.133 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:04:40.133 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:04:40.133 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:04:40.391 [60/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:04:40.391 [61/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:04:40.649 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:04:40.649 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:04:40.649 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:04:40.907 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:04:40.907 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:04:41.176 [67/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:04:41.177 [68/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:04:41.177 [69/268] Compiling C object 
lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:04:41.177 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:04:41.438 [71/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:04:41.438 [72/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:04:41.438 [73/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:04:41.438 [74/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:04:41.438 [75/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:04:41.438 [76/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:04:41.696 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:04:41.696 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:04:41.696 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:04:41.696 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:04:41.696 [81/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:04:41.955 [82/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:04:41.955 [83/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:04:41.955 [84/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:04:41.955 [85/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:04:41.955 [86/268] Linking static target lib/librte_eal.a 00:04:41.955 [87/268] Linking static target lib/librte_ring.a 00:04:42.212 [88/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:04:42.470 [89/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:04:42.470 [90/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:04:42.470 [91/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:04:42.470 [92/268] Linking static target lib/librte_mempool.a 00:04:42.728 [93/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:04:42.728 [94/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:04:42.728 [95/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:04:42.728 [96/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:04:42.728 [97/268] Linking static target lib/librte_rcu.a 00:04:42.728 [98/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:04:42.728 [99/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:04:42.728 [100/268] Linking static target lib/librte_mbuf.a 00:04:42.986 [101/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:04:42.986 [102/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:04:43.243 [103/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:04:43.243 [104/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:04:43.243 [105/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:04:43.574 [106/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:04:43.574 [107/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:04:43.574 [108/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:04:43.574 [109/268] Linking static target lib/librte_net.a 00:04:43.574 [110/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:04:43.574 [111/268] Linking static target lib/librte_meter.a 00:04:43.839 [112/268] Generating 
lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:04:43.839 [113/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:04:43.839 [114/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:04:44.096 [115/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:04:44.096 [116/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:04:44.096 [117/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:04:44.354 [118/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:04:44.354 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:04:44.613 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:04:44.872 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:04:44.872 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:04:45.131 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:04:45.131 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:04:45.131 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:04:45.388 [126/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:04:45.388 [127/268] Linking static target lib/librte_pci.a 00:04:45.388 [128/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:04:45.388 [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:04:45.388 [130/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:04:45.647 [131/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:04:45.647 [132/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:04:45.647 [133/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:04:45.647 [134/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:04:45.647 [135/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:04:45.647 [136/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:04:45.647 [137/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:04:45.647 [138/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:04:45.647 [139/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:04:45.905 [140/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:04:45.905 [141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:04:45.905 [142/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:04:45.906 [143/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:04:45.906 [144/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:04:45.906 [145/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:04:45.906 [146/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:04:45.906 [147/268] Linking static target lib/librte_ethdev.a 00:04:46.473 [148/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:04:46.473 [149/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:04:46.473 [150/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:04:46.473 [151/268] Compiling C object 
lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:04:46.473 [152/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:04:46.473 [153/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:04:46.473 [154/268] Linking static target lib/librte_cmdline.a 00:04:46.473 [155/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:04:46.731 [156/268] Linking static target lib/librte_timer.a 00:04:47.296 [157/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:04:47.296 [158/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:04:47.296 [159/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:04:47.296 [160/268] Linking static target lib/librte_hash.a 00:04:47.296 [161/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:04:47.296 [162/268] Linking static target lib/librte_compressdev.a 00:04:47.296 [163/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:04:47.296 [164/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:04:47.296 [165/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:04:47.861 [166/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:04:47.861 [167/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:04:47.861 [168/268] Linking static target lib/librte_dmadev.a 00:04:47.861 [169/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:04:47.861 [170/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:04:47.861 [171/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:04:48.119 [172/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:04:48.119 [173/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:04:48.119 [174/268] Linking static target lib/librte_cryptodev.a 00:04:48.119 [175/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:48.120 [176/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:04:48.377 [177/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:04:48.636 [178/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:04:48.636 [179/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:04:48.636 [180/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:04:48.636 [181/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:04:48.894 [182/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:04:48.894 [183/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:04:48.894 [184/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:49.152 [185/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:04:49.409 [186/268] Linking static target lib/librte_power.a 00:04:49.409 [187/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:04:49.409 [188/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:04:49.409 [189/268] Linking static target lib/librte_reorder.a 00:04:49.664 [190/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:04:49.664 [191/268] Compiling C object 
lib/librte_security.a.p/security_rte_security.c.o 00:04:49.664 [192/268] Linking static target lib/librte_security.a 00:04:49.920 [193/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:04:49.920 [194/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:04:50.177 [195/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:04:50.742 [196/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:04:50.742 [197/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:04:50.742 [198/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:04:50.742 [199/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:50.742 [200/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:04:50.742 [201/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:04:50.998 [202/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:04:51.254 [203/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:04:51.254 [204/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:04:51.511 [205/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:04:51.511 [206/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:04:51.511 [207/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:04:51.511 [208/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:04:51.769 [209/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:04:51.769 [210/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:04:51.769 [211/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:04:51.769 [212/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:04:52.029 [213/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:04:52.029 [214/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:04:52.029 [215/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:04:52.029 [216/268] Linking static target drivers/librte_bus_vdev.a 00:04:52.029 [217/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:04:52.029 [218/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:04:52.029 [219/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:04:52.029 [220/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:04:52.029 [221/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:04:52.029 [222/268] Linking static target drivers/librte_bus_pci.a 00:04:52.306 [223/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:04:52.307 [224/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:04:52.307 [225/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:04:52.307 [226/268] Linking static target drivers/librte_mempool_ring.a 00:04:52.307 [227/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:52.565 [228/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped 
by meson to capture output) 00:04:53.132 [229/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:04:53.133 [230/268] Linking static target lib/librte_vhost.a 00:04:54.068 [231/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:04:54.068 [232/268] Linking target lib/librte_eal.so.24.1 00:04:54.068 [233/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:04:54.068 [234/268] Linking target lib/librte_meter.so.24.1 00:04:54.068 [235/268] Linking target lib/librte_dmadev.so.24.1 00:04:54.068 [236/268] Linking target lib/librte_pci.so.24.1 00:04:54.068 [237/268] Linking target lib/librte_timer.so.24.1 00:04:54.068 [238/268] Linking target drivers/librte_bus_vdev.so.24.1 00:04:54.068 [239/268] Linking target lib/librte_ring.so.24.1 00:04:54.326 [240/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:04:54.326 [241/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:04:54.326 [242/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:54.326 [243/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:04:54.326 [244/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:04:54.326 [245/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:04:54.326 [246/268] Linking target lib/librte_rcu.so.24.1 00:04:54.326 [247/268] Linking target lib/librte_mempool.so.24.1 00:04:54.326 [248/268] Linking target drivers/librte_bus_pci.so.24.1 00:04:54.585 [249/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:04:54.585 [250/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:04:54.585 [251/268] Linking target drivers/librte_mempool_ring.so.24.1 00:04:54.585 [252/268] Linking target lib/librte_mbuf.so.24.1 00:04:54.585 [253/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:04:54.585 [254/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:04:54.843 [255/268] Linking target lib/librte_net.so.24.1 00:04:54.843 [256/268] Linking target lib/librte_reorder.so.24.1 00:04:54.843 [257/268] Linking target lib/librte_compressdev.so.24.1 00:04:54.843 [258/268] Linking target lib/librte_cryptodev.so.24.1 00:04:54.843 [259/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:04:54.843 [260/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:04:54.843 [261/268] Linking target lib/librte_hash.so.24.1 00:04:54.843 [262/268] Linking target lib/librte_cmdline.so.24.1 00:04:55.102 [263/268] Linking target lib/librte_security.so.24.1 00:04:55.102 [264/268] Linking target lib/librte_ethdev.so.24.1 00:04:55.102 [265/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:04:55.102 [266/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:04:55.102 [267/268] Linking target lib/librte_power.so.24.1 00:04:55.102 [268/268] Linking target lib/librte_vhost.so.24.1 00:04:55.102 INFO: autodetecting backend as ninja 00:04:55.102 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:05:21.702 CC lib/log/log_flags.o 00:05:21.702 CC lib/log/log.o 00:05:21.702 CC lib/log/log_deprecated.o 00:05:21.702 CC 
lib/ut_mock/mock.o 00:05:21.702 CC lib/ut/ut.o 00:05:21.702 LIB libspdk_ut_mock.a 00:05:21.702 SO libspdk_ut_mock.so.6.0 00:05:21.702 LIB libspdk_log.a 00:05:21.702 LIB libspdk_ut.a 00:05:21.702 SYMLINK libspdk_ut_mock.so 00:05:21.702 SO libspdk_log.so.7.1 00:05:21.702 SO libspdk_ut.so.2.0 00:05:21.702 SYMLINK libspdk_log.so 00:05:21.702 SYMLINK libspdk_ut.so 00:05:21.702 CC lib/util/base64.o 00:05:21.702 CC lib/util/cpuset.o 00:05:21.702 CC lib/util/bit_array.o 00:05:21.702 CC lib/util/crc32.o 00:05:21.702 CC lib/util/crc32c.o 00:05:21.702 CC lib/util/crc16.o 00:05:21.702 CXX lib/trace_parser/trace.o 00:05:21.702 CC lib/dma/dma.o 00:05:21.702 CC lib/ioat/ioat.o 00:05:21.702 CC lib/vfio_user/host/vfio_user_pci.o 00:05:21.702 CC lib/util/crc32_ieee.o 00:05:21.702 CC lib/util/crc64.o 00:05:21.702 CC lib/util/dif.o 00:05:21.702 CC lib/util/fd.o 00:05:21.702 LIB libspdk_dma.a 00:05:21.702 CC lib/util/fd_group.o 00:05:21.702 CC lib/util/file.o 00:05:21.702 SO libspdk_dma.so.5.0 00:05:21.702 CC lib/vfio_user/host/vfio_user.o 00:05:21.702 CC lib/util/hexlify.o 00:05:21.702 SYMLINK libspdk_dma.so 00:05:21.702 CC lib/util/iov.o 00:05:21.702 LIB libspdk_ioat.a 00:05:21.702 CC lib/util/math.o 00:05:21.702 SO libspdk_ioat.so.7.0 00:05:21.702 CC lib/util/net.o 00:05:21.702 SYMLINK libspdk_ioat.so 00:05:21.702 CC lib/util/pipe.o 00:05:21.702 CC lib/util/strerror_tls.o 00:05:21.702 CC lib/util/string.o 00:05:21.702 CC lib/util/uuid.o 00:05:21.702 LIB libspdk_vfio_user.a 00:05:21.702 CC lib/util/xor.o 00:05:21.702 SO libspdk_vfio_user.so.5.0 00:05:21.702 CC lib/util/zipf.o 00:05:21.702 CC lib/util/md5.o 00:05:21.702 SYMLINK libspdk_vfio_user.so 00:05:21.702 LIB libspdk_util.a 00:05:21.702 SO libspdk_util.so.10.1 00:05:21.702 LIB libspdk_trace_parser.a 00:05:21.702 SYMLINK libspdk_util.so 00:05:21.702 SO libspdk_trace_parser.so.6.0 00:05:21.962 CC lib/json/json_parse.o 00:05:21.962 CC lib/json/json_util.o 00:05:21.962 CC lib/json/json_write.o 00:05:21.962 SYMLINK libspdk_trace_parser.so 00:05:21.962 CC lib/idxd/idxd.o 00:05:21.962 CC lib/idxd/idxd_user.o 00:05:21.962 CC lib/conf/conf.o 00:05:21.962 CC lib/idxd/idxd_kernel.o 00:05:21.962 CC lib/rdma_utils/rdma_utils.o 00:05:21.962 CC lib/env_dpdk/env.o 00:05:21.962 CC lib/vmd/vmd.o 00:05:22.219 CC lib/vmd/led.o 00:05:22.219 CC lib/env_dpdk/memory.o 00:05:22.219 CC lib/env_dpdk/pci.o 00:05:22.219 CC lib/env_dpdk/init.o 00:05:22.219 LIB libspdk_json.a 00:05:22.219 LIB libspdk_rdma_utils.a 00:05:22.477 LIB libspdk_conf.a 00:05:22.477 SO libspdk_rdma_utils.so.1.0 00:05:22.477 SO libspdk_json.so.6.0 00:05:22.477 SO libspdk_conf.so.6.0 00:05:22.477 CC lib/env_dpdk/threads.o 00:05:22.477 SYMLINK libspdk_rdma_utils.so 00:05:22.477 CC lib/env_dpdk/pci_ioat.o 00:05:22.477 SYMLINK libspdk_json.so 00:05:22.477 SYMLINK libspdk_conf.so 00:05:22.477 CC lib/env_dpdk/pci_virtio.o 00:05:22.477 CC lib/env_dpdk/pci_vmd.o 00:05:22.735 CC lib/env_dpdk/pci_idxd.o 00:05:22.735 CC lib/rdma_provider/common.o 00:05:22.735 CC lib/env_dpdk/pci_event.o 00:05:22.735 LIB libspdk_idxd.a 00:05:22.735 CC lib/env_dpdk/sigbus_handler.o 00:05:22.735 SO libspdk_idxd.so.12.1 00:05:22.735 LIB libspdk_vmd.a 00:05:22.735 CC lib/jsonrpc/jsonrpc_server.o 00:05:22.735 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:05:22.735 SO libspdk_vmd.so.6.0 00:05:22.735 CC lib/env_dpdk/pci_dpdk.o 00:05:22.735 SYMLINK libspdk_idxd.so 00:05:22.735 CC lib/rdma_provider/rdma_provider_verbs.o 00:05:22.735 CC lib/env_dpdk/pci_dpdk_2207.o 00:05:22.735 CC lib/env_dpdk/pci_dpdk_2211.o 00:05:22.993 SYMLINK libspdk_vmd.so 
00:05:22.993 CC lib/jsonrpc/jsonrpc_client.o 00:05:22.993 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:05:22.993 LIB libspdk_rdma_provider.a 00:05:22.993 SO libspdk_rdma_provider.so.7.0 00:05:23.252 LIB libspdk_jsonrpc.a 00:05:23.252 SYMLINK libspdk_rdma_provider.so 00:05:23.252 SO libspdk_jsonrpc.so.6.0 00:05:23.252 SYMLINK libspdk_jsonrpc.so 00:05:23.509 CC lib/rpc/rpc.o 00:05:23.509 LIB libspdk_env_dpdk.a 00:05:23.766 SO libspdk_env_dpdk.so.15.1 00:05:23.766 LIB libspdk_rpc.a 00:05:23.766 SO libspdk_rpc.so.6.0 00:05:23.766 SYMLINK libspdk_rpc.so 00:05:24.022 SYMLINK libspdk_env_dpdk.so 00:05:24.022 CC lib/keyring/keyring.o 00:05:24.022 CC lib/keyring/keyring_rpc.o 00:05:24.022 CC lib/notify/notify.o 00:05:24.022 CC lib/notify/notify_rpc.o 00:05:24.022 CC lib/trace/trace.o 00:05:24.022 CC lib/trace/trace_flags.o 00:05:24.022 CC lib/trace/trace_rpc.o 00:05:24.281 LIB libspdk_notify.a 00:05:24.281 LIB libspdk_keyring.a 00:05:24.281 SO libspdk_notify.so.6.0 00:05:24.281 SO libspdk_keyring.so.2.0 00:05:24.281 LIB libspdk_trace.a 00:05:24.281 SYMLINK libspdk_notify.so 00:05:24.539 SO libspdk_trace.so.11.0 00:05:24.539 SYMLINK libspdk_keyring.so 00:05:24.539 SYMLINK libspdk_trace.so 00:05:24.797 CC lib/thread/thread.o 00:05:24.797 CC lib/thread/iobuf.o 00:05:24.797 CC lib/sock/sock_rpc.o 00:05:24.797 CC lib/sock/sock.o 00:05:25.362 LIB libspdk_sock.a 00:05:25.362 SO libspdk_sock.so.10.0 00:05:25.362 SYMLINK libspdk_sock.so 00:05:25.621 CC lib/nvme/nvme_ctrlr_cmd.o 00:05:25.621 CC lib/nvme/nvme_ctrlr.o 00:05:25.621 CC lib/nvme/nvme_fabric.o 00:05:25.621 CC lib/nvme/nvme_ns_cmd.o 00:05:25.621 CC lib/nvme/nvme_ns.o 00:05:25.621 CC lib/nvme/nvme_pcie_common.o 00:05:25.621 CC lib/nvme/nvme_pcie.o 00:05:25.621 CC lib/nvme/nvme_qpair.o 00:05:25.621 CC lib/nvme/nvme.o 00:05:26.556 LIB libspdk_thread.a 00:05:26.556 SO libspdk_thread.so.11.0 00:05:26.556 SYMLINK libspdk_thread.so 00:05:26.556 CC lib/nvme/nvme_quirks.o 00:05:26.556 CC lib/nvme/nvme_transport.o 00:05:26.556 CC lib/nvme/nvme_discovery.o 00:05:26.556 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:05:26.814 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:05:26.814 CC lib/nvme/nvme_tcp.o 00:05:26.815 CC lib/nvme/nvme_opal.o 00:05:26.815 CC lib/nvme/nvme_io_msg.o 00:05:26.815 CC lib/nvme/nvme_poll_group.o 00:05:27.073 CC lib/nvme/nvme_zns.o 00:05:27.331 CC lib/nvme/nvme_stubs.o 00:05:27.331 CC lib/nvme/nvme_auth.o 00:05:27.331 CC lib/nvme/nvme_cuse.o 00:05:27.589 CC lib/nvme/nvme_rdma.o 00:05:27.589 CC lib/accel/accel.o 00:05:27.847 CC lib/blob/blobstore.o 00:05:27.847 CC lib/init/json_config.o 00:05:27.847 CC lib/blob/request.o 00:05:28.104 CC lib/blob/zeroes.o 00:05:28.104 CC lib/blob/blob_bs_dev.o 00:05:28.104 CC lib/init/subsystem.o 00:05:28.104 CC lib/init/subsystem_rpc.o 00:05:28.362 CC lib/init/rpc.o 00:05:28.362 CC lib/accel/accel_rpc.o 00:05:28.362 CC lib/accel/accel_sw.o 00:05:28.620 CC lib/virtio/virtio.o 00:05:28.620 CC lib/virtio/virtio_vhost_user.o 00:05:28.620 LIB libspdk_init.a 00:05:28.620 SO libspdk_init.so.6.0 00:05:28.620 CC lib/fsdev/fsdev.o 00:05:28.620 CC lib/virtio/virtio_vfio_user.o 00:05:28.620 CC lib/fsdev/fsdev_io.o 00:05:28.620 SYMLINK libspdk_init.so 00:05:28.620 CC lib/fsdev/fsdev_rpc.o 00:05:28.877 CC lib/virtio/virtio_pci.o 00:05:29.135 LIB libspdk_accel.a 00:05:29.135 CC lib/event/app.o 00:05:29.135 CC lib/event/log_rpc.o 00:05:29.135 CC lib/event/reactor.o 00:05:29.135 CC lib/event/app_rpc.o 00:05:29.135 SO libspdk_accel.so.16.0 00:05:29.135 LIB libspdk_nvme.a 00:05:29.135 SYMLINK libspdk_accel.so 00:05:29.135 CC 
lib/event/scheduler_static.o 00:05:29.393 LIB libspdk_virtio.a 00:05:29.393 SO libspdk_virtio.so.7.0 00:05:29.393 SO libspdk_nvme.so.15.0 00:05:29.393 SYMLINK libspdk_virtio.so 00:05:29.393 CC lib/bdev/bdev.o 00:05:29.393 CC lib/bdev/bdev_rpc.o 00:05:29.393 CC lib/bdev/part.o 00:05:29.393 CC lib/bdev/bdev_zone.o 00:05:29.393 CC lib/bdev/scsi_nvme.o 00:05:29.668 SYMLINK libspdk_nvme.so 00:05:29.668 LIB libspdk_fsdev.a 00:05:29.668 SO libspdk_fsdev.so.2.0 00:05:29.668 LIB libspdk_event.a 00:05:29.949 SYMLINK libspdk_fsdev.so 00:05:29.949 SO libspdk_event.so.14.0 00:05:29.949 SYMLINK libspdk_event.so 00:05:29.949 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:05:30.883 LIB libspdk_fuse_dispatcher.a 00:05:30.883 SO libspdk_fuse_dispatcher.so.1.0 00:05:30.883 SYMLINK libspdk_fuse_dispatcher.so 00:05:31.449 LIB libspdk_blob.a 00:05:31.449 SO libspdk_blob.so.11.0 00:05:31.449 SYMLINK libspdk_blob.so 00:05:31.707 CC lib/lvol/lvol.o 00:05:31.707 CC lib/blobfs/blobfs.o 00:05:31.707 CC lib/blobfs/tree.o 00:05:32.273 LIB libspdk_bdev.a 00:05:32.273 SO libspdk_bdev.so.17.0 00:05:32.531 SYMLINK libspdk_bdev.so 00:05:32.790 CC lib/ublk/ublk.o 00:05:32.790 CC lib/ublk/ublk_rpc.o 00:05:32.790 CC lib/nvmf/ctrlr.o 00:05:32.790 CC lib/nvmf/ctrlr_discovery.o 00:05:32.790 CC lib/nvmf/ctrlr_bdev.o 00:05:32.790 CC lib/nbd/nbd.o 00:05:32.790 CC lib/scsi/dev.o 00:05:32.790 LIB libspdk_blobfs.a 00:05:32.790 CC lib/ftl/ftl_core.o 00:05:32.790 SO libspdk_blobfs.so.10.0 00:05:32.790 LIB libspdk_lvol.a 00:05:32.790 SYMLINK libspdk_blobfs.so 00:05:32.790 CC lib/scsi/lun.o 00:05:32.790 SO libspdk_lvol.so.10.0 00:05:33.048 CC lib/scsi/port.o 00:05:33.048 SYMLINK libspdk_lvol.so 00:05:33.048 CC lib/nvmf/subsystem.o 00:05:33.048 CC lib/nvmf/nvmf.o 00:05:33.048 CC lib/nvmf/nvmf_rpc.o 00:05:33.306 CC lib/ftl/ftl_init.o 00:05:33.306 CC lib/scsi/scsi.o 00:05:33.306 CC lib/nbd/nbd_rpc.o 00:05:33.306 CC lib/ftl/ftl_layout.o 00:05:33.564 CC lib/scsi/scsi_bdev.o 00:05:33.564 LIB libspdk_nbd.a 00:05:33.564 LIB libspdk_ublk.a 00:05:33.564 CC lib/nvmf/transport.o 00:05:33.564 SO libspdk_nbd.so.7.0 00:05:33.564 SO libspdk_ublk.so.3.0 00:05:33.564 CC lib/scsi/scsi_pr.o 00:05:33.564 SYMLINK libspdk_nbd.so 00:05:33.564 CC lib/scsi/scsi_rpc.o 00:05:33.564 SYMLINK libspdk_ublk.so 00:05:33.564 CC lib/ftl/ftl_debug.o 00:05:33.821 CC lib/scsi/task.o 00:05:33.821 CC lib/nvmf/tcp.o 00:05:34.077 CC lib/nvmf/stubs.o 00:05:34.077 CC lib/ftl/ftl_io.o 00:05:34.077 CC lib/ftl/ftl_sb.o 00:05:34.077 CC lib/nvmf/mdns_server.o 00:05:34.077 LIB libspdk_scsi.a 00:05:34.077 SO libspdk_scsi.so.9.0 00:05:34.077 CC lib/ftl/ftl_l2p.o 00:05:34.333 SYMLINK libspdk_scsi.so 00:05:34.333 CC lib/nvmf/rdma.o 00:05:34.333 CC lib/nvmf/auth.o 00:05:34.333 CC lib/ftl/ftl_l2p_flat.o 00:05:34.590 CC lib/iscsi/conn.o 00:05:34.590 CC lib/ftl/ftl_nv_cache.o 00:05:34.590 CC lib/vhost/vhost.o 00:05:34.590 CC lib/vhost/vhost_rpc.o 00:05:34.590 CC lib/vhost/vhost_scsi.o 00:05:34.590 CC lib/vhost/vhost_blk.o 00:05:34.590 CC lib/ftl/ftl_band.o 00:05:35.156 CC lib/iscsi/init_grp.o 00:05:35.156 CC lib/vhost/rte_vhost_user.o 00:05:35.156 CC lib/ftl/ftl_band_ops.o 00:05:35.156 CC lib/ftl/ftl_writer.o 00:05:35.415 CC lib/iscsi/iscsi.o 00:05:35.415 CC lib/ftl/ftl_rq.o 00:05:35.415 CC lib/iscsi/param.o 00:05:35.415 CC lib/iscsi/portal_grp.o 00:05:35.673 CC lib/ftl/ftl_reloc.o 00:05:35.673 CC lib/ftl/ftl_l2p_cache.o 00:05:35.673 CC lib/ftl/ftl_p2l.o 00:05:35.673 CC lib/ftl/ftl_p2l_log.o 00:05:35.673 CC lib/ftl/mngt/ftl_mngt.o 00:05:35.930 CC lib/iscsi/tgt_node.o 00:05:35.930 CC 
lib/iscsi/iscsi_subsystem.o 00:05:35.930 CC lib/iscsi/iscsi_rpc.o 00:05:35.930 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:05:36.188 CC lib/iscsi/task.o 00:05:36.188 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:05:36.188 CC lib/ftl/mngt/ftl_mngt_startup.o 00:05:36.188 CC lib/ftl/mngt/ftl_mngt_md.o 00:05:36.188 LIB libspdk_vhost.a 00:05:36.188 CC lib/ftl/mngt/ftl_mngt_misc.o 00:05:36.188 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:05:36.446 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:05:36.446 SO libspdk_vhost.so.8.0 00:05:36.446 CC lib/ftl/mngt/ftl_mngt_band.o 00:05:36.446 LIB libspdk_nvmf.a 00:05:36.446 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:05:36.446 SYMLINK libspdk_vhost.so 00:05:36.446 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:05:36.446 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:05:36.704 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:05:36.704 SO libspdk_nvmf.so.20.0 00:05:36.704 CC lib/ftl/utils/ftl_conf.o 00:05:36.704 CC lib/ftl/utils/ftl_md.o 00:05:36.704 CC lib/ftl/utils/ftl_mempool.o 00:05:36.704 CC lib/ftl/utils/ftl_bitmap.o 00:05:36.704 CC lib/ftl/utils/ftl_property.o 00:05:36.961 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:05:36.961 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:05:36.961 SYMLINK libspdk_nvmf.so 00:05:36.961 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:05:36.961 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:05:36.961 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:05:36.961 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:05:36.961 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:05:36.961 LIB libspdk_iscsi.a 00:05:36.961 CC lib/ftl/upgrade/ftl_sb_v3.o 00:05:37.219 CC lib/ftl/upgrade/ftl_sb_v5.o 00:05:37.219 CC lib/ftl/nvc/ftl_nvc_dev.o 00:05:37.219 SO libspdk_iscsi.so.8.0 00:05:37.219 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:05:37.219 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:05:37.219 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:05:37.219 CC lib/ftl/base/ftl_base_dev.o 00:05:37.219 SYMLINK libspdk_iscsi.so 00:05:37.219 CC lib/ftl/base/ftl_base_bdev.o 00:05:37.219 CC lib/ftl/ftl_trace.o 00:05:37.477 LIB libspdk_ftl.a 00:05:37.735 SO libspdk_ftl.so.9.0 00:05:38.051 SYMLINK libspdk_ftl.so 00:05:38.629 CC module/env_dpdk/env_dpdk_rpc.o 00:05:38.629 CC module/accel/ioat/accel_ioat.o 00:05:38.629 CC module/blob/bdev/blob_bdev.o 00:05:38.629 CC module/sock/posix/posix.o 00:05:38.629 CC module/scheduler/dynamic/scheduler_dynamic.o 00:05:38.629 CC module/accel/iaa/accel_iaa.o 00:05:38.629 CC module/accel/dsa/accel_dsa.o 00:05:38.629 CC module/keyring/file/keyring.o 00:05:38.629 CC module/accel/error/accel_error.o 00:05:38.629 CC module/fsdev/aio/fsdev_aio.o 00:05:38.629 LIB libspdk_env_dpdk_rpc.a 00:05:38.886 SO libspdk_env_dpdk_rpc.so.6.0 00:05:38.886 LIB libspdk_scheduler_dynamic.a 00:05:38.886 SO libspdk_scheduler_dynamic.so.4.0 00:05:38.886 CC module/accel/error/accel_error_rpc.o 00:05:38.886 SYMLINK libspdk_env_dpdk_rpc.so 00:05:38.886 CC module/keyring/file/keyring_rpc.o 00:05:38.886 CC module/accel/ioat/accel_ioat_rpc.o 00:05:38.886 SYMLINK libspdk_scheduler_dynamic.so 00:05:38.886 LIB libspdk_blob_bdev.a 00:05:38.886 SO libspdk_blob_bdev.so.11.0 00:05:38.886 CC module/accel/iaa/accel_iaa_rpc.o 00:05:39.144 LIB libspdk_accel_ioat.a 00:05:39.144 SYMLINK libspdk_blob_bdev.so 00:05:39.144 LIB libspdk_keyring_file.a 00:05:39.144 CC module/fsdev/aio/fsdev_aio_rpc.o 00:05:39.144 SO libspdk_accel_ioat.so.6.0 00:05:39.144 LIB libspdk_accel_error.a 00:05:39.144 SO libspdk_keyring_file.so.2.0 00:05:39.144 CC module/accel/dsa/accel_dsa_rpc.o 00:05:39.144 SO libspdk_accel_error.so.2.0 00:05:39.144 LIB libspdk_accel_iaa.a 00:05:39.144 CC 
module/scheduler/gscheduler/gscheduler.o 00:05:39.144 SYMLINK libspdk_accel_ioat.so 00:05:39.144 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:05:39.144 CC module/fsdev/aio/linux_aio_mgr.o 00:05:39.144 SO libspdk_accel_iaa.so.3.0 00:05:39.144 SYMLINK libspdk_accel_error.so 00:05:39.144 SYMLINK libspdk_keyring_file.so 00:05:39.144 SYMLINK libspdk_accel_iaa.so 00:05:39.401 LIB libspdk_accel_dsa.a 00:05:39.401 SO libspdk_accel_dsa.so.5.0 00:05:39.401 LIB libspdk_scheduler_gscheduler.a 00:05:39.401 CC module/keyring/linux/keyring.o 00:05:39.401 LIB libspdk_scheduler_dpdk_governor.a 00:05:39.401 SO libspdk_scheduler_gscheduler.so.4.0 00:05:39.401 SO libspdk_scheduler_dpdk_governor.so.4.0 00:05:39.401 SYMLINK libspdk_accel_dsa.so 00:05:39.658 CC module/keyring/linux/keyring_rpc.o 00:05:39.658 LIB libspdk_fsdev_aio.a 00:05:39.658 SYMLINK libspdk_scheduler_gscheduler.so 00:05:39.658 SYMLINK libspdk_scheduler_dpdk_governor.so 00:05:39.658 CC module/bdev/delay/vbdev_delay.o 00:05:39.658 CC module/bdev/error/vbdev_error.o 00:05:39.658 CC module/blobfs/bdev/blobfs_bdev.o 00:05:39.658 SO libspdk_fsdev_aio.so.1.0 00:05:39.658 LIB libspdk_sock_posix.a 00:05:39.658 SO libspdk_sock_posix.so.6.0 00:05:39.658 LIB libspdk_keyring_linux.a 00:05:39.658 CC module/bdev/error/vbdev_error_rpc.o 00:05:39.658 SYMLINK libspdk_fsdev_aio.so 00:05:39.658 CC module/bdev/delay/vbdev_delay_rpc.o 00:05:39.658 SO libspdk_keyring_linux.so.1.0 00:05:39.916 SYMLINK libspdk_sock_posix.so 00:05:39.916 CC module/bdev/lvol/vbdev_lvol.o 00:05:39.916 CC module/bdev/malloc/bdev_malloc.o 00:05:39.916 CC module/bdev/gpt/gpt.o 00:05:39.916 SYMLINK libspdk_keyring_linux.so 00:05:39.916 CC module/bdev/gpt/vbdev_gpt.o 00:05:39.916 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:05:39.916 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:05:39.916 CC module/bdev/malloc/bdev_malloc_rpc.o 00:05:39.916 CC module/bdev/null/bdev_null.o 00:05:40.174 LIB libspdk_bdev_error.a 00:05:40.174 LIB libspdk_bdev_delay.a 00:05:40.174 LIB libspdk_blobfs_bdev.a 00:05:40.174 SO libspdk_bdev_error.so.6.0 00:05:40.174 SO libspdk_bdev_delay.so.6.0 00:05:40.174 SO libspdk_blobfs_bdev.so.6.0 00:05:40.174 SYMLINK libspdk_bdev_error.so 00:05:40.174 LIB libspdk_bdev_gpt.a 00:05:40.174 CC module/bdev/null/bdev_null_rpc.o 00:05:40.174 SYMLINK libspdk_blobfs_bdev.so 00:05:40.174 SYMLINK libspdk_bdev_delay.so 00:05:40.174 SO libspdk_bdev_gpt.so.6.0 00:05:40.432 SYMLINK libspdk_bdev_gpt.so 00:05:40.432 CC module/bdev/passthru/vbdev_passthru.o 00:05:40.432 CC module/bdev/nvme/bdev_nvme.o 00:05:40.432 CC module/bdev/split/vbdev_split.o 00:05:40.432 LIB libspdk_bdev_malloc.a 00:05:40.432 LIB libspdk_bdev_null.a 00:05:40.432 CC module/bdev/raid/bdev_raid.o 00:05:40.432 SO libspdk_bdev_malloc.so.6.0 00:05:40.432 CC module/bdev/zone_block/vbdev_zone_block.o 00:05:40.432 SO libspdk_bdev_null.so.6.0 00:05:40.690 LIB libspdk_bdev_lvol.a 00:05:40.690 SO libspdk_bdev_lvol.so.6.0 00:05:40.690 SYMLINK libspdk_bdev_null.so 00:05:40.690 CC module/bdev/aio/bdev_aio.o 00:05:40.690 SYMLINK libspdk_bdev_malloc.so 00:05:40.690 CC module/bdev/aio/bdev_aio_rpc.o 00:05:40.690 CC module/bdev/ftl/bdev_ftl.o 00:05:40.690 SYMLINK libspdk_bdev_lvol.so 00:05:40.690 CC module/bdev/raid/bdev_raid_rpc.o 00:05:40.690 CC module/bdev/split/vbdev_split_rpc.o 00:05:40.948 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:05:40.948 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:05:40.948 CC module/bdev/iscsi/bdev_iscsi.o 00:05:40.948 LIB libspdk_bdev_split.a 00:05:40.948 SO libspdk_bdev_split.so.6.0 00:05:40.948 CC 
module/bdev/ftl/bdev_ftl_rpc.o 00:05:40.948 CC module/bdev/raid/bdev_raid_sb.o 00:05:41.206 SYMLINK libspdk_bdev_split.so 00:05:41.206 CC module/bdev/raid/raid0.o 00:05:41.206 LIB libspdk_bdev_passthru.a 00:05:41.206 CC module/bdev/raid/raid1.o 00:05:41.206 LIB libspdk_bdev_zone_block.a 00:05:41.206 SO libspdk_bdev_passthru.so.6.0 00:05:41.206 SO libspdk_bdev_zone_block.so.6.0 00:05:41.206 LIB libspdk_bdev_aio.a 00:05:41.206 SYMLINK libspdk_bdev_passthru.so 00:05:41.206 CC module/bdev/raid/concat.o 00:05:41.206 SYMLINK libspdk_bdev_zone_block.so 00:05:41.206 LIB libspdk_bdev_ftl.a 00:05:41.206 SO libspdk_bdev_aio.so.6.0 00:05:41.465 SO libspdk_bdev_ftl.so.6.0 00:05:41.465 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:05:41.465 SYMLINK libspdk_bdev_aio.so 00:05:41.465 CC module/bdev/nvme/bdev_nvme_rpc.o 00:05:41.465 CC module/bdev/nvme/nvme_rpc.o 00:05:41.465 SYMLINK libspdk_bdev_ftl.so 00:05:41.465 CC module/bdev/nvme/bdev_mdns_client.o 00:05:41.465 CC module/bdev/virtio/bdev_virtio_scsi.o 00:05:41.465 CC module/bdev/virtio/bdev_virtio_blk.o 00:05:41.465 CC module/bdev/nvme/vbdev_opal.o 00:05:41.722 LIB libspdk_bdev_iscsi.a 00:05:41.722 CC module/bdev/virtio/bdev_virtio_rpc.o 00:05:41.722 SO libspdk_bdev_iscsi.so.6.0 00:05:41.980 SYMLINK libspdk_bdev_iscsi.so 00:05:41.980 CC module/bdev/nvme/vbdev_opal_rpc.o 00:05:41.980 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:05:41.980 LIB libspdk_bdev_raid.a 00:05:41.980 SO libspdk_bdev_raid.so.6.0 00:05:42.238 SYMLINK libspdk_bdev_raid.so 00:05:42.238 LIB libspdk_bdev_virtio.a 00:05:42.497 SO libspdk_bdev_virtio.so.6.0 00:05:42.498 SYMLINK libspdk_bdev_virtio.so 00:05:43.875 LIB libspdk_bdev_nvme.a 00:05:43.875 SO libspdk_bdev_nvme.so.7.1 00:05:43.875 SYMLINK libspdk_bdev_nvme.so 00:05:44.441 CC module/event/subsystems/vmd/vmd_rpc.o 00:05:44.441 CC module/event/subsystems/vmd/vmd.o 00:05:44.441 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:05:44.441 CC module/event/subsystems/iobuf/iobuf.o 00:05:44.441 CC module/event/subsystems/sock/sock.o 00:05:44.441 CC module/event/subsystems/keyring/keyring.o 00:05:44.441 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:05:44.441 CC module/event/subsystems/scheduler/scheduler.o 00:05:44.441 CC module/event/subsystems/fsdev/fsdev.o 00:05:44.718 LIB libspdk_event_sock.a 00:05:44.718 LIB libspdk_event_keyring.a 00:05:44.718 SO libspdk_event_sock.so.5.0 00:05:44.718 LIB libspdk_event_vhost_blk.a 00:05:44.718 LIB libspdk_event_fsdev.a 00:05:44.718 LIB libspdk_event_vmd.a 00:05:44.718 SO libspdk_event_keyring.so.1.0 00:05:44.718 LIB libspdk_event_scheduler.a 00:05:44.718 SYMLINK libspdk_event_sock.so 00:05:44.718 SO libspdk_event_vhost_blk.so.3.0 00:05:44.718 SO libspdk_event_vmd.so.6.0 00:05:44.718 SO libspdk_event_fsdev.so.1.0 00:05:44.718 LIB libspdk_event_iobuf.a 00:05:44.718 SO libspdk_event_scheduler.so.4.0 00:05:44.718 SYMLINK libspdk_event_keyring.so 00:05:44.718 SO libspdk_event_iobuf.so.3.0 00:05:44.718 SYMLINK libspdk_event_fsdev.so 00:05:44.718 SYMLINK libspdk_event_vhost_blk.so 00:05:44.718 SYMLINK libspdk_event_vmd.so 00:05:44.718 SYMLINK libspdk_event_scheduler.so 00:05:44.718 SYMLINK libspdk_event_iobuf.so 00:05:45.015 CC module/event/subsystems/accel/accel.o 00:05:45.274 LIB libspdk_event_accel.a 00:05:45.274 SO libspdk_event_accel.so.6.0 00:05:45.274 SYMLINK libspdk_event_accel.so 00:05:45.532 CC module/event/subsystems/bdev/bdev.o 00:05:45.791 LIB libspdk_event_bdev.a 00:05:45.791 SO libspdk_event_bdev.so.6.0 00:05:45.791 SYMLINK libspdk_event_bdev.so 00:05:46.049 CC 
module/event/subsystems/scsi/scsi.o 00:05:46.049 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:05:46.049 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:05:46.049 CC module/event/subsystems/nbd/nbd.o 00:05:46.049 CC module/event/subsystems/ublk/ublk.o 00:05:46.307 LIB libspdk_event_nbd.a 00:05:46.307 LIB libspdk_event_ublk.a 00:05:46.307 SO libspdk_event_nbd.so.6.0 00:05:46.307 SO libspdk_event_ublk.so.3.0 00:05:46.307 LIB libspdk_event_scsi.a 00:05:46.307 LIB libspdk_event_nvmf.a 00:05:46.565 SO libspdk_event_scsi.so.6.0 00:05:46.565 SYMLINK libspdk_event_nbd.so 00:05:46.565 SYMLINK libspdk_event_ublk.so 00:05:46.565 SO libspdk_event_nvmf.so.6.0 00:05:46.565 SYMLINK libspdk_event_scsi.so 00:05:46.565 SYMLINK libspdk_event_nvmf.so 00:05:46.822 CC module/event/subsystems/iscsi/iscsi.o 00:05:46.822 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:05:46.822 LIB libspdk_event_vhost_scsi.a 00:05:46.822 SO libspdk_event_vhost_scsi.so.3.0 00:05:47.080 LIB libspdk_event_iscsi.a 00:05:47.080 SYMLINK libspdk_event_vhost_scsi.so 00:05:47.081 SO libspdk_event_iscsi.so.6.0 00:05:47.081 SYMLINK libspdk_event_iscsi.so 00:05:47.081 SO libspdk.so.6.0 00:05:47.081 SYMLINK libspdk.so 00:05:47.341 CXX app/trace/trace.o 00:05:47.341 CC test/rpc_client/rpc_client_test.o 00:05:47.341 TEST_HEADER include/spdk/accel.h 00:05:47.341 CC app/trace_record/trace_record.o 00:05:47.341 TEST_HEADER include/spdk/accel_module.h 00:05:47.341 TEST_HEADER include/spdk/assert.h 00:05:47.341 TEST_HEADER include/spdk/barrier.h 00:05:47.341 TEST_HEADER include/spdk/base64.h 00:05:47.341 TEST_HEADER include/spdk/bdev.h 00:05:47.341 TEST_HEADER include/spdk/bdev_module.h 00:05:47.341 TEST_HEADER include/spdk/bdev_zone.h 00:05:47.599 TEST_HEADER include/spdk/bit_array.h 00:05:47.599 TEST_HEADER include/spdk/bit_pool.h 00:05:47.599 TEST_HEADER include/spdk/blob_bdev.h 00:05:47.599 TEST_HEADER include/spdk/blobfs_bdev.h 00:05:47.599 TEST_HEADER include/spdk/blobfs.h 00:05:47.599 TEST_HEADER include/spdk/blob.h 00:05:47.599 TEST_HEADER include/spdk/conf.h 00:05:47.599 TEST_HEADER include/spdk/config.h 00:05:47.599 TEST_HEADER include/spdk/cpuset.h 00:05:47.599 TEST_HEADER include/spdk/crc16.h 00:05:47.599 TEST_HEADER include/spdk/crc32.h 00:05:47.599 TEST_HEADER include/spdk/crc64.h 00:05:47.599 TEST_HEADER include/spdk/dif.h 00:05:47.599 TEST_HEADER include/spdk/dma.h 00:05:47.599 TEST_HEADER include/spdk/endian.h 00:05:47.599 TEST_HEADER include/spdk/env_dpdk.h 00:05:47.599 CC app/nvmf_tgt/nvmf_main.o 00:05:47.599 TEST_HEADER include/spdk/env.h 00:05:47.599 TEST_HEADER include/spdk/event.h 00:05:47.599 TEST_HEADER include/spdk/fd_group.h 00:05:47.599 TEST_HEADER include/spdk/fd.h 00:05:47.599 TEST_HEADER include/spdk/file.h 00:05:47.599 TEST_HEADER include/spdk/fsdev.h 00:05:47.599 TEST_HEADER include/spdk/fsdev_module.h 00:05:47.599 TEST_HEADER include/spdk/ftl.h 00:05:47.599 TEST_HEADER include/spdk/fuse_dispatcher.h 00:05:47.599 TEST_HEADER include/spdk/gpt_spec.h 00:05:47.599 TEST_HEADER include/spdk/hexlify.h 00:05:47.599 TEST_HEADER include/spdk/histogram_data.h 00:05:47.599 TEST_HEADER include/spdk/idxd.h 00:05:47.599 TEST_HEADER include/spdk/idxd_spec.h 00:05:47.599 TEST_HEADER include/spdk/init.h 00:05:47.599 TEST_HEADER include/spdk/ioat.h 00:05:47.599 TEST_HEADER include/spdk/ioat_spec.h 00:05:47.599 TEST_HEADER include/spdk/iscsi_spec.h 00:05:47.599 TEST_HEADER include/spdk/json.h 00:05:47.599 TEST_HEADER include/spdk/jsonrpc.h 00:05:47.599 TEST_HEADER include/spdk/keyring.h 00:05:47.599 TEST_HEADER 
include/spdk/keyring_module.h 00:05:47.599 TEST_HEADER include/spdk/likely.h 00:05:47.599 TEST_HEADER include/spdk/log.h 00:05:47.599 TEST_HEADER include/spdk/lvol.h 00:05:47.599 TEST_HEADER include/spdk/md5.h 00:05:47.599 CC examples/util/zipf/zipf.o 00:05:47.599 TEST_HEADER include/spdk/memory.h 00:05:47.599 TEST_HEADER include/spdk/mmio.h 00:05:47.599 CC test/thread/poller_perf/poller_perf.o 00:05:47.599 TEST_HEADER include/spdk/nbd.h 00:05:47.599 TEST_HEADER include/spdk/net.h 00:05:47.599 TEST_HEADER include/spdk/notify.h 00:05:47.599 TEST_HEADER include/spdk/nvme.h 00:05:47.599 TEST_HEADER include/spdk/nvme_intel.h 00:05:47.599 CC test/app/bdev_svc/bdev_svc.o 00:05:47.599 TEST_HEADER include/spdk/nvme_ocssd.h 00:05:47.599 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:05:47.599 TEST_HEADER include/spdk/nvme_spec.h 00:05:47.599 TEST_HEADER include/spdk/nvme_zns.h 00:05:47.599 TEST_HEADER include/spdk/nvmf_cmd.h 00:05:47.599 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:05:47.599 TEST_HEADER include/spdk/nvmf.h 00:05:47.599 TEST_HEADER include/spdk/nvmf_spec.h 00:05:47.600 TEST_HEADER include/spdk/nvmf_transport.h 00:05:47.600 TEST_HEADER include/spdk/opal.h 00:05:47.600 TEST_HEADER include/spdk/opal_spec.h 00:05:47.600 TEST_HEADER include/spdk/pci_ids.h 00:05:47.600 CC test/dma/test_dma/test_dma.o 00:05:47.600 TEST_HEADER include/spdk/pipe.h 00:05:47.600 TEST_HEADER include/spdk/queue.h 00:05:47.600 TEST_HEADER include/spdk/reduce.h 00:05:47.600 TEST_HEADER include/spdk/rpc.h 00:05:47.600 TEST_HEADER include/spdk/scheduler.h 00:05:47.600 TEST_HEADER include/spdk/scsi.h 00:05:47.600 TEST_HEADER include/spdk/scsi_spec.h 00:05:47.600 TEST_HEADER include/spdk/sock.h 00:05:47.600 TEST_HEADER include/spdk/stdinc.h 00:05:47.600 TEST_HEADER include/spdk/string.h 00:05:47.600 TEST_HEADER include/spdk/thread.h 00:05:47.600 TEST_HEADER include/spdk/trace.h 00:05:47.600 TEST_HEADER include/spdk/trace_parser.h 00:05:47.600 TEST_HEADER include/spdk/tree.h 00:05:47.600 TEST_HEADER include/spdk/ublk.h 00:05:47.858 TEST_HEADER include/spdk/util.h 00:05:47.858 TEST_HEADER include/spdk/uuid.h 00:05:47.858 TEST_HEADER include/spdk/version.h 00:05:47.858 CC test/env/mem_callbacks/mem_callbacks.o 00:05:47.858 TEST_HEADER include/spdk/vfio_user_pci.h 00:05:47.858 TEST_HEADER include/spdk/vfio_user_spec.h 00:05:47.858 TEST_HEADER include/spdk/vhost.h 00:05:47.858 TEST_HEADER include/spdk/vmd.h 00:05:47.858 TEST_HEADER include/spdk/xor.h 00:05:47.858 TEST_HEADER include/spdk/zipf.h 00:05:47.858 CXX test/cpp_headers/accel.o 00:05:47.858 LINK rpc_client_test 00:05:47.858 LINK nvmf_tgt 00:05:47.858 LINK bdev_svc 00:05:47.858 LINK zipf 00:05:47.858 LINK poller_perf 00:05:48.117 LINK spdk_trace_record 00:05:48.117 CXX test/cpp_headers/accel_module.o 00:05:48.117 LINK spdk_trace 00:05:48.375 CC test/env/vtophys/vtophys.o 00:05:48.375 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:05:48.375 CXX test/cpp_headers/assert.o 00:05:48.375 CC test/env/memory/memory_ut.o 00:05:48.375 CC examples/ioat/perf/perf.o 00:05:48.375 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:05:48.633 LINK test_dma 00:05:48.633 CC app/iscsi_tgt/iscsi_tgt.o 00:05:48.633 CXX test/cpp_headers/barrier.o 00:05:48.633 LINK vtophys 00:05:48.633 CC test/event/event_perf/event_perf.o 00:05:48.633 LINK env_dpdk_post_init 00:05:48.633 LINK ioat_perf 00:05:48.891 CXX test/cpp_headers/base64.o 00:05:48.891 LINK iscsi_tgt 00:05:48.891 CXX test/cpp_headers/bdev.o 00:05:48.891 LINK mem_callbacks 00:05:48.891 CXX test/cpp_headers/bdev_module.o 00:05:48.891 LINK 
event_perf 00:05:48.891 CXX test/cpp_headers/bdev_zone.o 00:05:49.150 LINK nvme_fuzz 00:05:49.150 CC examples/ioat/verify/verify.o 00:05:49.150 CXX test/cpp_headers/bit_array.o 00:05:49.409 CC test/event/reactor/reactor.o 00:05:49.409 CC test/accel/dif/dif.o 00:05:49.409 CC app/spdk_tgt/spdk_tgt.o 00:05:49.409 CC test/blobfs/mkfs/mkfs.o 00:05:49.409 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:05:49.409 CC test/nvme/aer/aer.o 00:05:49.409 LINK verify 00:05:49.409 CXX test/cpp_headers/bit_pool.o 00:05:49.667 LINK reactor 00:05:49.667 CC test/lvol/esnap/esnap.o 00:05:49.667 LINK mkfs 00:05:49.925 LINK spdk_tgt 00:05:49.925 CXX test/cpp_headers/blob_bdev.o 00:05:49.925 CC examples/vmd/lsvmd/lsvmd.o 00:05:49.925 CC test/event/reactor_perf/reactor_perf.o 00:05:49.925 LINK aer 00:05:50.183 CC examples/vmd/led/led.o 00:05:50.183 LINK lsvmd 00:05:50.183 CXX test/cpp_headers/blobfs_bdev.o 00:05:50.183 LINK reactor_perf 00:05:50.183 CC app/spdk_lspci/spdk_lspci.o 00:05:50.183 LINK led 00:05:50.183 CC test/nvme/reset/reset.o 00:05:50.441 LINK memory_ut 00:05:50.441 CXX test/cpp_headers/blobfs.o 00:05:50.441 CXX test/cpp_headers/blob.o 00:05:50.441 LINK spdk_lspci 00:05:50.441 LINK dif 00:05:50.771 CC test/event/app_repeat/app_repeat.o 00:05:50.771 LINK reset 00:05:50.771 CC examples/idxd/perf/perf.o 00:05:50.771 CXX test/cpp_headers/conf.o 00:05:50.771 CC test/app/histogram_perf/histogram_perf.o 00:05:50.771 CC app/spdk_nvme_perf/perf.o 00:05:50.771 CC test/env/pci/pci_ut.o 00:05:50.771 LINK app_repeat 00:05:51.046 CC test/app/jsoncat/jsoncat.o 00:05:51.046 LINK histogram_perf 00:05:51.046 CXX test/cpp_headers/config.o 00:05:51.046 CXX test/cpp_headers/cpuset.o 00:05:51.046 CC test/nvme/sgl/sgl.o 00:05:51.046 LINK jsoncat 00:05:51.046 CXX test/cpp_headers/crc16.o 00:05:51.304 CC test/event/scheduler/scheduler.o 00:05:51.304 CC test/app/stub/stub.o 00:05:51.304 LINK pci_ut 00:05:51.304 CXX test/cpp_headers/crc32.o 00:05:51.304 LINK sgl 00:05:51.304 LINK idxd_perf 00:05:51.304 LINK iscsi_fuzz 00:05:51.562 LINK stub 00:05:51.562 LINK scheduler 00:05:51.562 CC test/bdev/bdevio/bdevio.o 00:05:51.562 CXX test/cpp_headers/crc64.o 00:05:51.562 CC test/nvme/e2edp/nvme_dp.o 00:05:51.821 CXX test/cpp_headers/dif.o 00:05:51.821 CXX test/cpp_headers/dma.o 00:05:51.821 CC test/nvme/overhead/overhead.o 00:05:51.821 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:05:51.821 CC test/nvme/err_injection/err_injection.o 00:05:52.079 LINK nvme_dp 00:05:52.079 LINK bdevio 00:05:52.079 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:05:52.079 CXX test/cpp_headers/endian.o 00:05:52.079 LINK overhead 00:05:52.079 LINK spdk_nvme_perf 00:05:52.079 CC test/nvme/startup/startup.o 00:05:52.337 LINK err_injection 00:05:52.337 CXX test/cpp_headers/env_dpdk.o 00:05:52.337 CXX test/cpp_headers/env.o 00:05:52.337 CC test/nvme/reserve/reserve.o 00:05:52.337 LINK startup 00:05:52.337 CC test/nvme/simple_copy/simple_copy.o 00:05:52.337 CC app/spdk_nvme_identify/identify.o 00:05:52.596 CXX test/cpp_headers/event.o 00:05:52.596 CC examples/interrupt_tgt/interrupt_tgt.o 00:05:52.596 LINK reserve 00:05:52.596 CC test/nvme/connect_stress/connect_stress.o 00:05:52.596 LINK simple_copy 00:05:52.596 CXX test/cpp_headers/fd_group.o 00:05:52.596 CC test/nvme/boot_partition/boot_partition.o 00:05:52.854 LINK vhost_fuzz 00:05:52.854 LINK interrupt_tgt 00:05:52.854 LINK connect_stress 00:05:53.111 CXX test/cpp_headers/fd.o 00:05:53.111 LINK boot_partition 00:05:53.111 CC test/nvme/compliance/nvme_compliance.o 00:05:53.111 CC 
test/nvme/doorbell_aers/doorbell_aers.o 00:05:53.111 CC test/nvme/fused_ordering/fused_ordering.o 00:05:53.374 CXX test/cpp_headers/file.o 00:05:53.374 CC test/nvme/fdp/fdp.o 00:05:53.374 CC test/nvme/cuse/cuse.o 00:05:53.374 LINK doorbell_aers 00:05:53.633 CC app/spdk_nvme_discover/discovery_aer.o 00:05:53.633 LINK spdk_nvme_identify 00:05:53.633 LINK nvme_compliance 00:05:53.633 LINK fused_ordering 00:05:53.633 CXX test/cpp_headers/fsdev.o 00:05:53.633 CXX test/cpp_headers/fsdev_module.o 00:05:53.891 CXX test/cpp_headers/ftl.o 00:05:53.891 LINK spdk_nvme_discover 00:05:53.891 CXX test/cpp_headers/fuse_dispatcher.o 00:05:53.891 CXX test/cpp_headers/gpt_spec.o 00:05:53.891 LINK fdp 00:05:53.891 CXX test/cpp_headers/hexlify.o 00:05:54.149 CXX test/cpp_headers/histogram_data.o 00:05:54.149 CC app/spdk_top/spdk_top.o 00:05:54.149 CC app/spdk_dd/spdk_dd.o 00:05:54.407 CC examples/thread/thread/thread_ex.o 00:05:54.407 CC app/vhost/vhost.o 00:05:54.407 CC examples/sock/hello_world/hello_sock.o 00:05:54.407 CXX test/cpp_headers/idxd.o 00:05:54.666 LINK vhost 00:05:54.666 CC app/fio/nvme/fio_plugin.o 00:05:54.666 CXX test/cpp_headers/idxd_spec.o 00:05:54.925 LINK thread 00:05:54.925 LINK spdk_dd 00:05:54.925 LINK hello_sock 00:05:54.925 CXX test/cpp_headers/init.o 00:05:55.183 CXX test/cpp_headers/ioat.o 00:05:55.183 CXX test/cpp_headers/ioat_spec.o 00:05:55.183 LINK cuse 00:05:55.183 CXX test/cpp_headers/iscsi_spec.o 00:05:55.442 CC app/fio/bdev/fio_plugin.o 00:05:55.442 LINK spdk_top 00:05:55.442 CC examples/nvme/hello_world/hello_world.o 00:05:55.700 CXX test/cpp_headers/json.o 00:05:55.700 CC examples/accel/perf/accel_perf.o 00:05:55.700 LINK spdk_nvme 00:05:55.959 CC examples/nvme/reconnect/reconnect.o 00:05:55.959 CC examples/blob/hello_world/hello_blob.o 00:05:55.959 CXX test/cpp_headers/jsonrpc.o 00:05:55.959 CC examples/blob/cli/blobcli.o 00:05:55.959 LINK hello_world 00:05:55.959 CC examples/nvme/nvme_manage/nvme_manage.o 00:05:56.217 CXX test/cpp_headers/keyring.o 00:05:56.217 LINK spdk_bdev 00:05:56.217 LINK reconnect 00:05:56.217 CXX test/cpp_headers/keyring_module.o 00:05:56.217 CXX test/cpp_headers/likely.o 00:05:56.217 LINK hello_blob 00:05:56.475 CC examples/nvme/arbitration/arbitration.o 00:05:56.733 LINK accel_perf 00:05:56.733 CXX test/cpp_headers/log.o 00:05:56.733 CC examples/nvme/hotplug/hotplug.o 00:05:56.733 CC examples/nvme/cmb_copy/cmb_copy.o 00:05:56.733 LINK blobcli 00:05:56.992 CXX test/cpp_headers/lvol.o 00:05:56.992 CC examples/fsdev/hello_world/hello_fsdev.o 00:05:57.251 LINK cmb_copy 00:05:57.251 LINK nvme_manage 00:05:57.251 LINK esnap 00:05:57.251 LINK arbitration 00:05:57.251 LINK hotplug 00:05:57.251 CXX test/cpp_headers/md5.o 00:05:57.510 CXX test/cpp_headers/memory.o 00:05:57.510 CXX test/cpp_headers/mmio.o 00:05:57.768 CXX test/cpp_headers/nbd.o 00:05:57.768 CC examples/bdev/hello_world/hello_bdev.o 00:05:57.768 CXX test/cpp_headers/net.o 00:05:57.768 CC examples/nvme/abort/abort.o 00:05:57.768 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:05:58.026 CXX test/cpp_headers/notify.o 00:05:58.026 CXX test/cpp_headers/nvme.o 00:05:58.026 LINK hello_fsdev 00:05:58.026 CXX test/cpp_headers/nvme_intel.o 00:05:58.026 CXX test/cpp_headers/nvme_ocssd.o 00:05:58.026 CC examples/bdev/bdevperf/bdevperf.o 00:05:58.285 CXX test/cpp_headers/nvme_ocssd_spec.o 00:05:58.285 LINK pmr_persistence 00:05:58.285 CXX test/cpp_headers/nvme_spec.o 00:05:58.285 LINK hello_bdev 00:05:58.285 CXX test/cpp_headers/nvme_zns.o 00:05:58.285 CXX test/cpp_headers/nvmf_cmd.o 00:05:58.544 
CXX test/cpp_headers/nvmf_fc_spec.o 00:05:58.544 LINK abort 00:05:58.544 CXX test/cpp_headers/nvmf.o 00:05:58.544 CXX test/cpp_headers/nvmf_spec.o 00:05:58.544 CXX test/cpp_headers/nvmf_transport.o 00:05:58.544 CXX test/cpp_headers/opal.o 00:05:58.803 CXX test/cpp_headers/opal_spec.o 00:05:58.803 CXX test/cpp_headers/pci_ids.o 00:05:58.803 CXX test/cpp_headers/pipe.o 00:05:59.062 CXX test/cpp_headers/queue.o 00:05:59.062 CXX test/cpp_headers/reduce.o 00:05:59.062 CXX test/cpp_headers/rpc.o 00:05:59.062 CXX test/cpp_headers/scheduler.o 00:05:59.062 CXX test/cpp_headers/scsi.o 00:05:59.062 CXX test/cpp_headers/scsi_spec.o 00:05:59.062 CXX test/cpp_headers/sock.o 00:05:59.062 CXX test/cpp_headers/stdinc.o 00:05:59.062 CXX test/cpp_headers/string.o 00:05:59.320 CXX test/cpp_headers/thread.o 00:05:59.320 CXX test/cpp_headers/trace.o 00:05:59.320 CXX test/cpp_headers/trace_parser.o 00:05:59.320 CXX test/cpp_headers/ublk.o 00:05:59.320 CXX test/cpp_headers/tree.o 00:05:59.320 CXX test/cpp_headers/util.o 00:05:59.320 CXX test/cpp_headers/uuid.o 00:05:59.320 CXX test/cpp_headers/version.o 00:05:59.320 CXX test/cpp_headers/vfio_user_pci.o 00:05:59.320 CXX test/cpp_headers/vfio_user_spec.o 00:05:59.579 LINK bdevperf 00:05:59.579 CXX test/cpp_headers/vhost.o 00:05:59.579 CXX test/cpp_headers/vmd.o 00:05:59.579 CXX test/cpp_headers/xor.o 00:05:59.579 CXX test/cpp_headers/zipf.o 00:06:00.145 CC examples/nvmf/nvmf/nvmf.o 00:06:00.715 LINK nvmf 00:06:00.979 00:06:00.979 real 1m40.551s 00:06:00.979 user 9m47.826s 00:06:00.979 sys 1m49.001s 00:06:00.979 11:18:46 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:06:00.979 11:18:46 make -- common/autotest_common.sh@10 -- $ set +x 00:06:00.979 ************************************ 00:06:00.979 END TEST make 00:06:00.979 ************************************ 00:06:00.979 11:18:46 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:06:00.979 11:18:46 -- pm/common@29 -- $ signal_monitor_resources TERM 00:06:00.979 11:18:46 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:06:00.979 11:18:46 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:06:00.979 11:18:46 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:06:00.979 11:18:46 -- pm/common@44 -- $ pid=5301 00:06:00.979 11:18:46 -- pm/common@50 -- $ kill -TERM 5301 00:06:00.979 11:18:46 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:06:00.979 11:18:46 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:06:00.979 11:18:46 -- pm/common@44 -- $ pid=5302 00:06:00.979 11:18:46 -- pm/common@50 -- $ kill -TERM 5302 00:06:00.979 11:18:46 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:06:00.979 11:18:46 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:06:01.239 11:18:46 -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:01.239 11:18:46 -- common/autotest_common.sh@1693 -- # lcov --version 00:06:01.239 11:18:46 -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:01.239 11:18:46 -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:01.239 11:18:46 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:01.239 11:18:46 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:01.239 11:18:46 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:01.239 11:18:46 -- scripts/common.sh@336 -- # IFS=.-: 00:06:01.239 11:18:46 -- scripts/common.sh@336 -- 
# read -ra ver1 00:06:01.239 11:18:46 -- scripts/common.sh@337 -- # IFS=.-: 00:06:01.239 11:18:46 -- scripts/common.sh@337 -- # read -ra ver2 00:06:01.239 11:18:46 -- scripts/common.sh@338 -- # local 'op=<' 00:06:01.239 11:18:46 -- scripts/common.sh@340 -- # ver1_l=2 00:06:01.239 11:18:46 -- scripts/common.sh@341 -- # ver2_l=1 00:06:01.239 11:18:46 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:01.239 11:18:46 -- scripts/common.sh@344 -- # case "$op" in 00:06:01.239 11:18:46 -- scripts/common.sh@345 -- # : 1 00:06:01.239 11:18:46 -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:01.239 11:18:46 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:01.239 11:18:46 -- scripts/common.sh@365 -- # decimal 1 00:06:01.239 11:18:46 -- scripts/common.sh@353 -- # local d=1 00:06:01.239 11:18:46 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:01.239 11:18:46 -- scripts/common.sh@355 -- # echo 1 00:06:01.239 11:18:46 -- scripts/common.sh@365 -- # ver1[v]=1 00:06:01.239 11:18:46 -- scripts/common.sh@366 -- # decimal 2 00:06:01.239 11:18:46 -- scripts/common.sh@353 -- # local d=2 00:06:01.239 11:18:46 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:01.239 11:18:46 -- scripts/common.sh@355 -- # echo 2 00:06:01.239 11:18:46 -- scripts/common.sh@366 -- # ver2[v]=2 00:06:01.239 11:18:46 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:01.239 11:18:46 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:01.239 11:18:46 -- scripts/common.sh@368 -- # return 0 00:06:01.239 11:18:46 -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:01.239 11:18:46 -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:01.239 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:01.239 --rc genhtml_branch_coverage=1 00:06:01.239 --rc genhtml_function_coverage=1 00:06:01.239 --rc genhtml_legend=1 00:06:01.239 --rc geninfo_all_blocks=1 00:06:01.239 --rc geninfo_unexecuted_blocks=1 00:06:01.239 00:06:01.239 ' 00:06:01.239 11:18:46 -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:01.239 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:01.239 --rc genhtml_branch_coverage=1 00:06:01.239 --rc genhtml_function_coverage=1 00:06:01.239 --rc genhtml_legend=1 00:06:01.239 --rc geninfo_all_blocks=1 00:06:01.239 --rc geninfo_unexecuted_blocks=1 00:06:01.239 00:06:01.239 ' 00:06:01.239 11:18:46 -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:01.239 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:01.239 --rc genhtml_branch_coverage=1 00:06:01.239 --rc genhtml_function_coverage=1 00:06:01.239 --rc genhtml_legend=1 00:06:01.239 --rc geninfo_all_blocks=1 00:06:01.239 --rc geninfo_unexecuted_blocks=1 00:06:01.239 00:06:01.239 ' 00:06:01.239 11:18:46 -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:01.239 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:01.239 --rc genhtml_branch_coverage=1 00:06:01.239 --rc genhtml_function_coverage=1 00:06:01.239 --rc genhtml_legend=1 00:06:01.239 --rc geninfo_all_blocks=1 00:06:01.239 --rc geninfo_unexecuted_blocks=1 00:06:01.239 00:06:01.239 ' 00:06:01.239 11:18:46 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:01.239 11:18:46 -- nvmf/common.sh@7 -- # uname -s 00:06:01.239 11:18:46 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:01.239 11:18:46 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:01.239 11:18:46 -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:06:01.239 11:18:46 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:01.239 11:18:46 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:01.239 11:18:46 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:01.239 11:18:46 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:01.239 11:18:46 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:01.239 11:18:46 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:01.239 11:18:46 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:01.239 11:18:46 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a 00:06:01.239 11:18:46 -- nvmf/common.sh@18 -- # NVME_HOSTID=806d442c-08ba-4719-b76d-33169283459a 00:06:01.239 11:18:46 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:01.239 11:18:46 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:01.239 11:18:46 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:06:01.239 11:18:46 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:01.239 11:18:46 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:01.239 11:18:46 -- scripts/common.sh@15 -- # shopt -s extglob 00:06:01.239 11:18:46 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:01.239 11:18:46 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:01.239 11:18:46 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:01.239 11:18:46 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:01.239 11:18:46 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:01.239 11:18:46 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:01.240 11:18:46 -- paths/export.sh@5 -- # export PATH 00:06:01.240 11:18:46 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:01.240 11:18:46 -- nvmf/common.sh@51 -- # : 0 00:06:01.240 11:18:46 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:01.240 11:18:46 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:01.240 11:18:46 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:01.240 11:18:46 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:01.240 11:18:46 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:01.240 11:18:46 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:01.240 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:01.240 11:18:46 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:01.240 11:18:46 -- nvmf/common.sh@39 
-- # '[' 0 -eq 1 ']' 00:06:01.240 11:18:46 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:01.240 11:18:46 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:06:01.240 11:18:46 -- spdk/autotest.sh@32 -- # uname -s 00:06:01.240 11:18:46 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:06:01.240 11:18:46 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:06:01.240 11:18:46 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:06:01.240 11:18:46 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:06:01.240 11:18:46 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:06:01.240 11:18:46 -- spdk/autotest.sh@44 -- # modprobe nbd 00:06:01.240 11:18:46 -- spdk/autotest.sh@46 -- # type -P udevadm 00:06:01.240 11:18:46 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:06:01.240 11:18:46 -- spdk/autotest.sh@48 -- # udevadm_pid=56238 00:06:01.240 11:18:46 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:06:01.240 11:18:46 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:06:01.240 11:18:46 -- pm/common@17 -- # local monitor 00:06:01.240 11:18:46 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:06:01.240 11:18:46 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:06:01.240 11:18:46 -- pm/common@25 -- # sleep 1 00:06:01.240 11:18:46 -- pm/common@21 -- # date +%s 00:06:01.498 11:18:46 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732101526 00:06:01.498 11:18:46 -- pm/common@21 -- # date +%s 00:06:01.498 11:18:46 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732101526 00:06:01.498 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732101526_collect-vmstat.pm.log 00:06:01.499 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732101526_collect-cpu-load.pm.log 00:06:02.438 11:18:47 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:06:02.438 11:18:47 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:06:02.438 11:18:47 -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:02.438 11:18:47 -- common/autotest_common.sh@10 -- # set +x 00:06:02.438 11:18:47 -- spdk/autotest.sh@59 -- # create_test_list 00:06:02.438 11:18:47 -- common/autotest_common.sh@752 -- # xtrace_disable 00:06:02.439 11:18:47 -- common/autotest_common.sh@10 -- # set +x 00:06:02.439 11:18:47 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:06:02.439 11:18:47 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:06:02.439 11:18:47 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:06:02.439 11:18:47 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:06:02.439 11:18:47 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:06:02.439 11:18:47 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:06:02.439 11:18:47 -- common/autotest_common.sh@1457 -- # uname 00:06:02.439 11:18:47 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:06:02.439 11:18:47 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:06:02.439 11:18:47 -- common/autotest_common.sh@1477 -- # uname 00:06:02.439 
11:18:47 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:06:02.439 11:18:47 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:06:02.439 11:18:47 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:06:02.439 lcov: LCOV version 1.15 00:06:02.439 11:18:47 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:06:24.378 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:06:24.378 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:06:42.477 11:19:25 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:06:42.477 11:19:25 -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:42.477 11:19:25 -- common/autotest_common.sh@10 -- # set +x 00:06:42.477 11:19:25 -- spdk/autotest.sh@78 -- # rm -f 00:06:42.477 11:19:25 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:06:42.477 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:42.477 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:06:42.477 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:06:42.477 11:19:25 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:06:42.477 11:19:25 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:06:42.477 11:19:25 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:06:42.477 11:19:25 -- common/autotest_common.sh@1658 -- # local nvme bdf 00:06:42.477 11:19:25 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:06:42.477 11:19:25 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:06:42.477 11:19:25 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:06:42.477 11:19:25 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:06:42.477 11:19:25 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:42.477 11:19:25 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:06:42.477 11:19:25 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 00:06:42.477 11:19:25 -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:06:42.477 11:19:25 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:06:42.477 11:19:25 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:42.477 11:19:25 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:06:42.477 11:19:25 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n2 00:06:42.477 11:19:25 -- common/autotest_common.sh@1650 -- # local device=nvme1n2 00:06:42.477 11:19:25 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:06:42.477 11:19:25 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:42.477 11:19:25 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:06:42.477 11:19:25 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n3 00:06:42.477 11:19:25 -- common/autotest_common.sh@1650 -- # local device=nvme1n3 00:06:42.478 11:19:25 -- 
common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:06:42.478 11:19:25 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:42.478 11:19:25 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:06:42.478 11:19:25 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:06:42.478 11:19:25 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:06:42.478 11:19:25 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:06:42.478 11:19:25 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:06:42.478 11:19:25 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:06:42.478 No valid GPT data, bailing 00:06:42.478 11:19:26 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:06:42.478 11:19:26 -- scripts/common.sh@394 -- # pt= 00:06:42.478 11:19:26 -- scripts/common.sh@395 -- # return 1 00:06:42.478 11:19:26 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:06:42.478 1+0 records in 00:06:42.478 1+0 records out 00:06:42.478 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00305962 s, 343 MB/s 00:06:42.478 11:19:26 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:06:42.478 11:19:26 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:06:42.478 11:19:26 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:06:42.478 11:19:26 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:06:42.478 11:19:26 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:06:42.478 No valid GPT data, bailing 00:06:42.478 11:19:26 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:06:42.478 11:19:26 -- scripts/common.sh@394 -- # pt= 00:06:42.478 11:19:26 -- scripts/common.sh@395 -- # return 1 00:06:42.478 11:19:26 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:06:42.478 1+0 records in 00:06:42.478 1+0 records out 00:06:42.478 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00384195 s, 273 MB/s 00:06:42.478 11:19:26 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:06:42.478 11:19:26 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:06:42.478 11:19:26 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n2 00:06:42.478 11:19:26 -- scripts/common.sh@381 -- # local block=/dev/nvme1n2 pt 00:06:42.478 11:19:26 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:06:42.478 No valid GPT data, bailing 00:06:42.478 11:19:26 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:06:42.478 11:19:26 -- scripts/common.sh@394 -- # pt= 00:06:42.478 11:19:26 -- scripts/common.sh@395 -- # return 1 00:06:42.478 11:19:26 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:06:42.478 1+0 records in 00:06:42.478 1+0 records out 00:06:42.478 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00420174 s, 250 MB/s 00:06:42.478 11:19:26 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:06:42.478 11:19:26 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:06:42.478 11:19:26 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n3 00:06:42.478 11:19:26 -- scripts/common.sh@381 -- # local block=/dev/nvme1n3 pt 00:06:42.478 11:19:26 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:06:42.478 No valid GPT data, bailing 00:06:42.478 11:19:26 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:06:42.478 11:19:26 -- scripts/common.sh@394 -- # pt= 00:06:42.478 11:19:26 -- scripts/common.sh@395 -- # return 1 
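The pre-cleanup pass traced above walks every /dev/nvme*n* namespace: zoned namespaces are skipped via /sys/block/<ns>/queue/zoned, spdk-gpt.py and blkid -s PTTYPE look for an existing partition table, and only when nothing is found ("No valid GPT data, bailing") is the first MiB wiped with dd. A condensed sketch of that flow follows; it mirrors the logged commands but simplifies the helper logic, so it is an approximation rather than the literal autotest.sh source.

    for dev in /dev/nvme*n*; do
        ns=$(basename "$dev")
        [[ $ns == *p* ]] && continue   # the real loop excludes partitions with an extglob pattern
        # skip zoned namespaces; only conventional ones get scrubbed
        if [[ -e /sys/block/$ns/queue/zoned && $(cat /sys/block/$ns/queue/zoned) != none ]]; then
            continue
        fi
        # treat the namespace as in use if blkid reports any partition-table type
        if [[ -z $(blkid -s PTTYPE -o value "$dev") ]]; then
            dd if=/dev/zero of="$dev" bs=1M count=1   # same 1 MiB wipe as in the log
        fi
    done
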
00:06:42.478 11:19:26 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:06:42.478 1+0 records in 00:06:42.478 1+0 records out 00:06:42.478 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00358353 s, 293 MB/s 00:06:42.478 11:19:26 -- spdk/autotest.sh@105 -- # sync 00:06:42.478 11:19:26 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:06:42.478 11:19:26 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:06:42.478 11:19:26 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:06:42.736 11:19:28 -- spdk/autotest.sh@111 -- # uname -s 00:06:42.736 11:19:28 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:06:42.736 11:19:28 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:06:42.736 11:19:28 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:06:43.303 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:43.303 Hugepages 00:06:43.303 node hugesize free / total 00:06:43.303 node0 1048576kB 0 / 0 00:06:43.303 node0 2048kB 0 / 0 00:06:43.303 00:06:43.303 Type BDF Vendor Device NUMA Driver Device Block devices 00:06:43.303 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:06:43.303 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:06:43.561 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:06:43.561 11:19:28 -- spdk/autotest.sh@117 -- # uname -s 00:06:43.561 11:19:28 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:06:43.561 11:19:28 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:06:43.561 11:19:28 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:06:44.134 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:44.134 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:06:44.134 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:06:44.399 11:19:29 -- common/autotest_common.sh@1517 -- # sleep 1 00:06:45.334 11:19:30 -- common/autotest_common.sh@1518 -- # bdfs=() 00:06:45.334 11:19:30 -- common/autotest_common.sh@1518 -- # local bdfs 00:06:45.334 11:19:30 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:06:45.334 11:19:30 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:06:45.334 11:19:30 -- common/autotest_common.sh@1498 -- # bdfs=() 00:06:45.334 11:19:30 -- common/autotest_common.sh@1498 -- # local bdfs 00:06:45.334 11:19:30 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:06:45.334 11:19:30 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:06:45.334 11:19:30 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:06:45.334 11:19:30 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:06:45.334 11:19:30 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:06:45.334 11:19:30 -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:06:45.592 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:45.592 Waiting for block devices as requested 00:06:45.592 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:06:45.852 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:06:45.852 11:19:31 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:06:45.852 11:19:31 -- 
common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:06:45.852 11:19:31 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:06:45.852 11:19:31 -- common/autotest_common.sh@1487 -- # grep 0000:00:10.0/nvme/nvme 00:06:45.852 11:19:31 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:06:45.852 11:19:31 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:06:45.852 11:19:31 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:06:45.852 11:19:31 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 00:06:45.852 11:19:31 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme1 00:06:45.852 11:19:31 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]] 00:06:45.852 11:19:31 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1 00:06:45.852 11:19:31 -- common/autotest_common.sh@1531 -- # grep oacs 00:06:45.852 11:19:31 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:06:45.852 11:19:31 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:06:45.852 11:19:31 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:06:45.852 11:19:31 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:06:45.852 11:19:31 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:06:45.852 11:19:31 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:06:45.852 11:19:31 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:06:45.852 11:19:31 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:06:45.852 11:19:31 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:06:45.852 11:19:31 -- common/autotest_common.sh@1543 -- # continue 00:06:45.852 11:19:31 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:06:45.852 11:19:31 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:06:45.852 11:19:31 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:06:45.852 11:19:31 -- common/autotest_common.sh@1487 -- # grep 0000:00:11.0/nvme/nvme 00:06:45.852 11:19:31 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:06:45.852 11:19:31 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:06:45.852 11:19:31 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:06:45.852 11:19:31 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:06:45.852 11:19:31 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:06:45.852 11:19:31 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:06:45.852 11:19:31 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:06:45.852 11:19:31 -- common/autotest_common.sh@1531 -- # grep oacs 00:06:45.852 11:19:31 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:06:46.111 11:19:31 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:06:46.111 11:19:31 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:06:46.111 11:19:31 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:06:46.111 11:19:31 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:06:46.111 11:19:31 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:06:46.111 11:19:31 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:06:46.111 11:19:31 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 
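The block above resolves each controller from its PCI address and reads two id-ctrl fields: OACS, whose bit 3 (0x8) indicates namespace-management support, and unvmcap, the unallocated NVM capacity (0 here, so there is nothing to revert and the loop continues). A stand-alone approximation of that probe, reusing the same enumeration and nvme-cli commands as the trace but with the plumbing simplified:

    # enumerate controller PCI addresses exactly as the trace does
    bdfs=($(/home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh | jq -r '.config[].params.traddr'))
    for bdf in "${bdfs[@]}"; do
        # map the PCI address back to its /dev/nvmeX node via the sysfs symlinks
        sysfs=$(readlink -f /sys/class/nvme/nvme* | grep "$bdf/nvme/nvme")
        ctrlr=/dev/$(basename "$sysfs")
        oacs=$(nvme id-ctrl "$ctrlr" | grep oacs | cut -d: -f2)
        unvmcap=$(nvme id-ctrl "$ctrlr" | grep unvmcap | cut -d: -f2)
        if (( (oacs & 0x8) == 0 )) || (( unvmcap == 0 )); then
            continue   # no namespace management, or no unallocated capacity to reclaim
        fi
        echo "$ctrlr has unallocated capacity: $unvmcap"
    done
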
00:06:46.111 11:19:31 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:06:46.111 11:19:31 -- common/autotest_common.sh@1543 -- # continue 00:06:46.111 11:19:31 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:06:46.111 11:19:31 -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:46.111 11:19:31 -- common/autotest_common.sh@10 -- # set +x 00:06:46.111 11:19:31 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:06:46.111 11:19:31 -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:46.111 11:19:31 -- common/autotest_common.sh@10 -- # set +x 00:06:46.111 11:19:31 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:06:46.678 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:46.678 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:06:46.678 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:06:46.678 11:19:32 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:06:46.678 11:19:32 -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:47.009 11:19:32 -- common/autotest_common.sh@10 -- # set +x 00:06:47.009 11:19:32 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:06:47.009 11:19:32 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:06:47.009 11:19:32 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:06:47.009 11:19:32 -- common/autotest_common.sh@1563 -- # bdfs=() 00:06:47.009 11:19:32 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:06:47.009 11:19:32 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:06:47.009 11:19:32 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:06:47.009 11:19:32 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:06:47.009 11:19:32 -- common/autotest_common.sh@1498 -- # bdfs=() 00:06:47.009 11:19:32 -- common/autotest_common.sh@1498 -- # local bdfs 00:06:47.009 11:19:32 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:06:47.009 11:19:32 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:06:47.009 11:19:32 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:06:47.009 11:19:32 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:06:47.009 11:19:32 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:06:47.009 11:19:32 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:06:47.009 11:19:32 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:06:47.009 11:19:32 -- common/autotest_common.sh@1566 -- # device=0x0010 00:06:47.009 11:19:32 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:06:47.009 11:19:32 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:06:47.009 11:19:32 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:06:47.009 11:19:32 -- common/autotest_common.sh@1566 -- # device=0x0010 00:06:47.009 11:19:32 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:06:47.009 11:19:32 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:06:47.009 11:19:32 -- common/autotest_common.sh@1572 -- # return 0 00:06:47.009 11:19:32 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:06:47.010 11:19:32 -- common/autotest_common.sh@1580 -- # return 0 00:06:47.010 11:19:32 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:06:47.010 11:19:32 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:06:47.010 
11:19:32 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:06:47.010 11:19:32 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:06:47.010 11:19:32 -- spdk/autotest.sh@149 -- # timing_enter lib 00:06:47.010 11:19:32 -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:47.010 11:19:32 -- common/autotest_common.sh@10 -- # set +x 00:06:47.010 11:19:32 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:06:47.010 11:19:32 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:06:47.010 11:19:32 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:47.010 11:19:32 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:47.010 11:19:32 -- common/autotest_common.sh@10 -- # set +x 00:06:47.010 ************************************ 00:06:47.010 START TEST env 00:06:47.010 ************************************ 00:06:47.010 11:19:32 env -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:06:47.010 * Looking for test storage... 00:06:47.010 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:06:47.010 11:19:32 env -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:47.010 11:19:32 env -- common/autotest_common.sh@1693 -- # lcov --version 00:06:47.010 11:19:32 env -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:47.010 11:19:32 env -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:47.010 11:19:32 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:47.010 11:19:32 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:47.010 11:19:32 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:47.010 11:19:32 env -- scripts/common.sh@336 -- # IFS=.-: 00:06:47.010 11:19:32 env -- scripts/common.sh@336 -- # read -ra ver1 00:06:47.010 11:19:32 env -- scripts/common.sh@337 -- # IFS=.-: 00:06:47.010 11:19:32 env -- scripts/common.sh@337 -- # read -ra ver2 00:06:47.010 11:19:32 env -- scripts/common.sh@338 -- # local 'op=<' 00:06:47.010 11:19:32 env -- scripts/common.sh@340 -- # ver1_l=2 00:06:47.010 11:19:32 env -- scripts/common.sh@341 -- # ver2_l=1 00:06:47.010 11:19:32 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:47.010 11:19:32 env -- scripts/common.sh@344 -- # case "$op" in 00:06:47.010 11:19:32 env -- scripts/common.sh@345 -- # : 1 00:06:47.010 11:19:32 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:47.010 11:19:32 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:47.010 11:19:32 env -- scripts/common.sh@365 -- # decimal 1 00:06:47.010 11:19:32 env -- scripts/common.sh@353 -- # local d=1 00:06:47.010 11:19:32 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:47.010 11:19:32 env -- scripts/common.sh@355 -- # echo 1 00:06:47.010 11:19:32 env -- scripts/common.sh@365 -- # ver1[v]=1 00:06:47.010 11:19:32 env -- scripts/common.sh@366 -- # decimal 2 00:06:47.010 11:19:32 env -- scripts/common.sh@353 -- # local d=2 00:06:47.010 11:19:32 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:47.010 11:19:32 env -- scripts/common.sh@355 -- # echo 2 00:06:47.010 11:19:32 env -- scripts/common.sh@366 -- # ver2[v]=2 00:06:47.010 11:19:32 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:47.010 11:19:32 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:47.010 11:19:32 env -- scripts/common.sh@368 -- # return 0 00:06:47.010 11:19:32 env -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:47.010 11:19:32 env -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:47.010 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:47.010 --rc genhtml_branch_coverage=1 00:06:47.010 --rc genhtml_function_coverage=1 00:06:47.010 --rc genhtml_legend=1 00:06:47.010 --rc geninfo_all_blocks=1 00:06:47.010 --rc geninfo_unexecuted_blocks=1 00:06:47.010 00:06:47.010 ' 00:06:47.010 11:19:32 env -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:47.010 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:47.010 --rc genhtml_branch_coverage=1 00:06:47.010 --rc genhtml_function_coverage=1 00:06:47.010 --rc genhtml_legend=1 00:06:47.010 --rc geninfo_all_blocks=1 00:06:47.010 --rc geninfo_unexecuted_blocks=1 00:06:47.010 00:06:47.010 ' 00:06:47.010 11:19:32 env -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:47.010 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:47.010 --rc genhtml_branch_coverage=1 00:06:47.010 --rc genhtml_function_coverage=1 00:06:47.010 --rc genhtml_legend=1 00:06:47.010 --rc geninfo_all_blocks=1 00:06:47.010 --rc geninfo_unexecuted_blocks=1 00:06:47.010 00:06:47.010 ' 00:06:47.010 11:19:32 env -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:47.010 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:47.010 --rc genhtml_branch_coverage=1 00:06:47.010 --rc genhtml_function_coverage=1 00:06:47.010 --rc genhtml_legend=1 00:06:47.010 --rc geninfo_all_blocks=1 00:06:47.010 --rc geninfo_unexecuted_blocks=1 00:06:47.010 00:06:47.010 ' 00:06:47.010 11:19:32 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:06:47.010 11:19:32 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:47.010 11:19:32 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:47.010 11:19:32 env -- common/autotest_common.sh@10 -- # set +x 00:06:47.010 ************************************ 00:06:47.010 START TEST env_memory 00:06:47.010 ************************************ 00:06:47.010 11:19:32 env.env_memory -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:06:47.270 00:06:47.270 00:06:47.270 CUnit - A unit testing framework for C - Version 2.1-3 00:06:47.270 http://cunit.sourceforge.net/ 00:06:47.270 00:06:47.270 00:06:47.270 Suite: memory 00:06:47.270 Test: alloc and free memory map ...[2024-11-20 11:19:32.631490] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 
283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:06:47.270 passed 00:06:47.270 Test: mem map translation ...[2024-11-20 11:19:32.661989] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:06:47.270 [2024-11-20 11:19:32.662077] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:06:47.270 [2024-11-20 11:19:32.662158] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:06:47.270 [2024-11-20 11:19:32.662175] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:06:47.270 passed 00:06:47.270 Test: mem map registration ...[2024-11-20 11:19:32.723567] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:06:47.270 [2024-11-20 11:19:32.723688] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:06:47.270 passed 00:06:47.270 Test: mem map adjacent registrations ...passed 00:06:47.270 00:06:47.270 Run Summary: Type Total Ran Passed Failed Inactive 00:06:47.270 suites 1 1 n/a 0 0 00:06:47.270 tests 4 4 4 0 0 00:06:47.270 asserts 152 152 152 0 n/a 00:06:47.270 00:06:47.270 Elapsed time = 0.197 seconds 00:06:47.270 00:06:47.270 real 0m0.216s 00:06:47.270 user 0m0.195s 00:06:47.270 sys 0m0.016s 00:06:47.270 11:19:32 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:47.270 11:19:32 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:06:47.270 ************************************ 00:06:47.270 END TEST env_memory 00:06:47.270 ************************************ 00:06:47.270 11:19:32 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:06:47.270 11:19:32 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:47.271 11:19:32 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:47.271 11:19:32 env -- common/autotest_common.sh@10 -- # set +x 00:06:47.271 ************************************ 00:06:47.271 START TEST env_vtophys 00:06:47.271 ************************************ 00:06:47.271 11:19:32 env.env_vtophys -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:06:47.532 EAL: lib.eal log level changed from notice to debug 00:06:47.532 EAL: Detected lcore 0 as core 0 on socket 0 00:06:47.532 EAL: Detected lcore 1 as core 0 on socket 0 00:06:47.532 EAL: Detected lcore 2 as core 0 on socket 0 00:06:47.532 EAL: Detected lcore 3 as core 0 on socket 0 00:06:47.532 EAL: Detected lcore 4 as core 0 on socket 0 00:06:47.532 EAL: Detected lcore 5 as core 0 on socket 0 00:06:47.532 EAL: Detected lcore 6 as core 0 on socket 0 00:06:47.532 EAL: Detected lcore 7 as core 0 on socket 0 00:06:47.532 EAL: Detected lcore 8 as core 0 on socket 0 00:06:47.532 EAL: Detected lcore 9 as core 0 on socket 0 00:06:47.532 EAL: Maximum logical cores by configuration: 128 00:06:47.532 EAL: Detected CPU lcores: 10 00:06:47.532 EAL: Detected NUMA nodes: 1 00:06:47.532 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:06:47.532 EAL: Detected shared linkage of DPDK 00:06:47.532 EAL: No 
shared files mode enabled, IPC will be disabled 00:06:47.532 EAL: Selected IOVA mode 'PA' 00:06:47.532 EAL: Probing VFIO support... 00:06:47.532 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:06:47.532 EAL: VFIO modules not loaded, skipping VFIO support... 00:06:47.532 EAL: Ask a virtual area of 0x2e000 bytes 00:06:47.532 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:06:47.532 EAL: Setting up physically contiguous memory... 00:06:47.532 EAL: Setting maximum number of open files to 524288 00:06:47.532 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:06:47.532 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:06:47.532 EAL: Ask a virtual area of 0x61000 bytes 00:06:47.532 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:06:47.532 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:47.532 EAL: Ask a virtual area of 0x400000000 bytes 00:06:47.532 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:06:47.532 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:06:47.532 EAL: Ask a virtual area of 0x61000 bytes 00:06:47.532 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:06:47.532 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:47.532 EAL: Ask a virtual area of 0x400000000 bytes 00:06:47.532 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:06:47.532 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:06:47.532 EAL: Ask a virtual area of 0x61000 bytes 00:06:47.532 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:06:47.532 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:47.532 EAL: Ask a virtual area of 0x400000000 bytes 00:06:47.532 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:06:47.532 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:06:47.532 EAL: Ask a virtual area of 0x61000 bytes 00:06:47.532 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:06:47.532 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:47.532 EAL: Ask a virtual area of 0x400000000 bytes 00:06:47.532 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:06:47.532 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:06:47.532 EAL: Hugepages will be freed exactly as allocated. 00:06:47.532 EAL: No shared files mode enabled, IPC is disabled 00:06:47.532 EAL: No shared files mode enabled, IPC is disabled 00:06:47.532 EAL: TSC frequency is ~2200000 KHz 00:06:47.532 EAL: Main lcore 0 is ready (tid=7fddec938a00;cpuset=[0]) 00:06:47.532 EAL: Trying to obtain current memory policy. 00:06:47.532 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:47.532 EAL: Restoring previous memory policy: 0 00:06:47.532 EAL: request: mp_malloc_sync 00:06:47.532 EAL: No shared files mode enabled, IPC is disabled 00:06:47.532 EAL: Heap on socket 0 was expanded by 2MB 00:06:47.532 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:06:47.532 EAL: No PCI address specified using 'addr=' in: bus=pci 00:06:47.532 EAL: Mem event callback 'spdk:(nil)' registered 00:06:47.532 EAL: Module /sys/module/vfio_pci not found! 
error 2 (No such file or directory) 00:06:47.532 00:06:47.532 00:06:47.532 CUnit - A unit testing framework for C - Version 2.1-3 00:06:47.532 http://cunit.sourceforge.net/ 00:06:47.532 00:06:47.532 00:06:47.532 Suite: components_suite 00:06:47.532 Test: vtophys_malloc_test ...passed 00:06:47.532 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:06:47.532 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:47.532 EAL: Restoring previous memory policy: 4 00:06:47.532 EAL: Calling mem event callback 'spdk:(nil)' 00:06:47.532 EAL: request: mp_malloc_sync 00:06:47.532 EAL: No shared files mode enabled, IPC is disabled 00:06:47.532 EAL: Heap on socket 0 was expanded by 4MB 00:06:47.532 EAL: Calling mem event callback 'spdk:(nil)' 00:06:47.532 EAL: request: mp_malloc_sync 00:06:47.532 EAL: No shared files mode enabled, IPC is disabled 00:06:47.532 EAL: Heap on socket 0 was shrunk by 4MB 00:06:47.532 EAL: Trying to obtain current memory policy. 00:06:47.532 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:47.532 EAL: Restoring previous memory policy: 4 00:06:47.532 EAL: Calling mem event callback 'spdk:(nil)' 00:06:47.532 EAL: request: mp_malloc_sync 00:06:47.532 EAL: No shared files mode enabled, IPC is disabled 00:06:47.532 EAL: Heap on socket 0 was expanded by 6MB 00:06:47.532 EAL: Calling mem event callback 'spdk:(nil)' 00:06:47.532 EAL: request: mp_malloc_sync 00:06:47.532 EAL: No shared files mode enabled, IPC is disabled 00:06:47.532 EAL: Heap on socket 0 was shrunk by 6MB 00:06:47.532 EAL: Trying to obtain current memory policy. 00:06:47.532 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:47.532 EAL: Restoring previous memory policy: 4 00:06:47.532 EAL: Calling mem event callback 'spdk:(nil)' 00:06:47.532 EAL: request: mp_malloc_sync 00:06:47.532 EAL: No shared files mode enabled, IPC is disabled 00:06:47.532 EAL: Heap on socket 0 was expanded by 10MB 00:06:47.532 EAL: Calling mem event callback 'spdk:(nil)' 00:06:47.532 EAL: request: mp_malloc_sync 00:06:47.532 EAL: No shared files mode enabled, IPC is disabled 00:06:47.532 EAL: Heap on socket 0 was shrunk by 10MB 00:06:47.532 EAL: Trying to obtain current memory policy. 00:06:47.532 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:47.532 EAL: Restoring previous memory policy: 4 00:06:47.532 EAL: Calling mem event callback 'spdk:(nil)' 00:06:47.532 EAL: request: mp_malloc_sync 00:06:47.532 EAL: No shared files mode enabled, IPC is disabled 00:06:47.532 EAL: Heap on socket 0 was expanded by 18MB 00:06:47.532 EAL: Calling mem event callback 'spdk:(nil)' 00:06:47.532 EAL: request: mp_malloc_sync 00:06:47.532 EAL: No shared files mode enabled, IPC is disabled 00:06:47.532 EAL: Heap on socket 0 was shrunk by 18MB 00:06:47.532 EAL: Trying to obtain current memory policy. 00:06:47.532 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:47.532 EAL: Restoring previous memory policy: 4 00:06:47.532 EAL: Calling mem event callback 'spdk:(nil)' 00:06:47.532 EAL: request: mp_malloc_sync 00:06:47.532 EAL: No shared files mode enabled, IPC is disabled 00:06:47.532 EAL: Heap on socket 0 was expanded by 34MB 00:06:47.532 EAL: Calling mem event callback 'spdk:(nil)' 00:06:47.532 EAL: request: mp_malloc_sync 00:06:47.532 EAL: No shared files mode enabled, IPC is disabled 00:06:47.532 EAL: Heap on socket 0 was shrunk by 34MB 00:06:47.532 EAL: Trying to obtain current memory policy. 
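The EAL messages above ("Module /sys/module/vfio not found", "VFIO modules not loaded, skipping VFIO support", "Selected IOVA mode 'PA'") line up with the earlier setup.sh output binding the NVMe controllers to uio_pci_generic: without vfio-pci the environment falls back to physical-address IOVA. A quick, hedged way to check the same condition on a host before a run (heuristic only, not part of the test scripts):

    if modinfo vfio_pci &>/dev/null && [[ -n $(ls -A /sys/kernel/iommu_groups 2>/dev/null) ]]; then
        echo "vfio-pci and an IOMMU look available; EAL can normally select IOVA mode 'VA'"
    else
        echo "no usable vfio-pci/IOMMU; expect uio_pci_generic bindings and IOVA mode 'PA'"
    fi
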
00:06:47.532 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:47.532 EAL: Restoring previous memory policy: 4 00:06:47.532 EAL: Calling mem event callback 'spdk:(nil)' 00:06:47.532 EAL: request: mp_malloc_sync 00:06:47.532 EAL: No shared files mode enabled, IPC is disabled 00:06:47.532 EAL: Heap on socket 0 was expanded by 66MB 00:06:47.532 EAL: Calling mem event callback 'spdk:(nil)' 00:06:47.532 EAL: request: mp_malloc_sync 00:06:47.532 EAL: No shared files mode enabled, IPC is disabled 00:06:47.532 EAL: Heap on socket 0 was shrunk by 66MB 00:06:47.532 EAL: Trying to obtain current memory policy. 00:06:47.532 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:47.532 EAL: Restoring previous memory policy: 4 00:06:47.532 EAL: Calling mem event callback 'spdk:(nil)' 00:06:47.532 EAL: request: mp_malloc_sync 00:06:47.532 EAL: No shared files mode enabled, IPC is disabled 00:06:47.532 EAL: Heap on socket 0 was expanded by 130MB 00:06:47.532 EAL: Calling mem event callback 'spdk:(nil)' 00:06:47.532 EAL: request: mp_malloc_sync 00:06:47.532 EAL: No shared files mode enabled, IPC is disabled 00:06:47.532 EAL: Heap on socket 0 was shrunk by 130MB 00:06:47.532 EAL: Trying to obtain current memory policy. 00:06:47.532 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:47.791 EAL: Restoring previous memory policy: 4 00:06:47.791 EAL: Calling mem event callback 'spdk:(nil)' 00:06:47.791 EAL: request: mp_malloc_sync 00:06:47.791 EAL: No shared files mode enabled, IPC is disabled 00:06:47.791 EAL: Heap on socket 0 was expanded by 258MB 00:06:47.791 EAL: Calling mem event callback 'spdk:(nil)' 00:06:47.791 EAL: request: mp_malloc_sync 00:06:47.791 EAL: No shared files mode enabled, IPC is disabled 00:06:47.791 EAL: Heap on socket 0 was shrunk by 258MB 00:06:47.791 EAL: Trying to obtain current memory policy. 00:06:47.791 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:47.791 EAL: Restoring previous memory policy: 4 00:06:47.791 EAL: Calling mem event callback 'spdk:(nil)' 00:06:47.791 EAL: request: mp_malloc_sync 00:06:47.791 EAL: No shared files mode enabled, IPC is disabled 00:06:47.791 EAL: Heap on socket 0 was expanded by 514MB 00:06:47.791 EAL: Calling mem event callback 'spdk:(nil)' 00:06:47.791 EAL: request: mp_malloc_sync 00:06:47.791 EAL: No shared files mode enabled, IPC is disabled 00:06:47.791 EAL: Heap on socket 0 was shrunk by 514MB 00:06:47.791 EAL: Trying to obtain current memory policy. 
00:06:47.791 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:48.051 EAL: Restoring previous memory policy: 4 00:06:48.051 EAL: Calling mem event callback 'spdk:(nil)' 00:06:48.051 EAL: request: mp_malloc_sync 00:06:48.051 EAL: No shared files mode enabled, IPC is disabled 00:06:48.051 EAL: Heap on socket 0 was expanded by 1026MB 00:06:48.051 EAL: Calling mem event callback 'spdk:(nil)' 00:06:48.309 EAL: request: mp_malloc_sync 00:06:48.309 passedEAL: No shared files mode enabled, IPC is disabled 00:06:48.309 EAL: Heap on socket 0 was shrunk by 1026MB 00:06:48.309 00:06:48.309 00:06:48.309 Run Summary: Type Total Ran Passed Failed Inactive 00:06:48.309 suites 1 1 n/a 0 0 00:06:48.309 tests 2 2 2 0 0 00:06:48.309 asserts 5435 5435 5435 0 n/a 00:06:48.309 00:06:48.309 Elapsed time = 0.670 seconds 00:06:48.309 EAL: Calling mem event callback 'spdk:(nil)' 00:06:48.309 EAL: request: mp_malloc_sync 00:06:48.309 EAL: No shared files mode enabled, IPC is disabled 00:06:48.309 EAL: Heap on socket 0 was shrunk by 2MB 00:06:48.309 EAL: No shared files mode enabled, IPC is disabled 00:06:48.309 EAL: No shared files mode enabled, IPC is disabled 00:06:48.309 EAL: No shared files mode enabled, IPC is disabled 00:06:48.309 00:06:48.309 real 0m0.881s 00:06:48.309 user 0m0.441s 00:06:48.309 sys 0m0.306s 00:06:48.309 11:19:33 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:48.309 11:19:33 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:06:48.309 ************************************ 00:06:48.309 END TEST env_vtophys 00:06:48.309 ************************************ 00:06:48.309 11:19:33 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:06:48.309 11:19:33 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:48.309 11:19:33 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:48.309 11:19:33 env -- common/autotest_common.sh@10 -- # set +x 00:06:48.309 ************************************ 00:06:48.309 START TEST env_pci 00:06:48.309 ************************************ 00:06:48.309 11:19:33 env.env_pci -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:06:48.309 00:06:48.309 00:06:48.309 CUnit - A unit testing framework for C - Version 2.1-3 00:06:48.309 http://cunit.sourceforge.net/ 00:06:48.309 00:06:48.309 00:06:48.309 Suite: pci 00:06:48.310 Test: pci_hook ...[2024-11-20 11:19:33.771785] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 58518 has claimed it 00:06:48.310 passed 00:06:48.310 00:06:48.310 Run Summary: Type Total Ran Passed Failed Inactive 00:06:48.310 suites 1 1 n/a 0 0 00:06:48.310 tests 1 1 1 0 0 00:06:48.310 asserts 25 25 25 0 n/a 00:06:48.310 00:06:48.310 Elapsed time = 0.002 seconds 00:06:48.310 EAL: Cannot find device (10000:00:01.0) 00:06:48.310 EAL: Failed to attach device on primary process 00:06:48.310 00:06:48.310 real 0m0.016s 00:06:48.310 user 0m0.009s 00:06:48.310 sys 0m0.007s 00:06:48.310 11:19:33 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:48.310 11:19:33 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:06:48.310 ************************************ 00:06:48.310 END TEST env_pci 00:06:48.310 ************************************ 00:06:48.310 11:19:33 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:06:48.310 11:19:33 env -- env/env.sh@15 -- # uname 00:06:48.310 11:19:33 env -- 
env/env.sh@15 -- # '[' Linux = Linux ']' 00:06:48.310 11:19:33 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:06:48.310 11:19:33 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:06:48.310 11:19:33 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:06:48.310 11:19:33 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:48.310 11:19:33 env -- common/autotest_common.sh@10 -- # set +x 00:06:48.310 ************************************ 00:06:48.310 START TEST env_dpdk_post_init 00:06:48.310 ************************************ 00:06:48.310 11:19:33 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:06:48.310 EAL: Detected CPU lcores: 10 00:06:48.310 EAL: Detected NUMA nodes: 1 00:06:48.310 EAL: Detected shared linkage of DPDK 00:06:48.310 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:06:48.310 EAL: Selected IOVA mode 'PA' 00:06:48.568 TELEMETRY: No legacy callbacks, legacy socket not created 00:06:48.568 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:06:48.568 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:06:48.568 Starting DPDK initialization... 00:06:48.568 Starting SPDK post initialization... 00:06:48.568 SPDK NVMe probe 00:06:48.568 Attaching to 0000:00:10.0 00:06:48.568 Attaching to 0000:00:11.0 00:06:48.568 Attached to 0000:00:10.0 00:06:48.568 Attached to 0000:00:11.0 00:06:48.568 Cleaning up... 00:06:48.568 00:06:48.568 real 0m0.194s 00:06:48.568 user 0m0.055s 00:06:48.568 sys 0m0.037s 00:06:48.568 11:19:34 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:48.568 11:19:34 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:06:48.568 ************************************ 00:06:48.568 END TEST env_dpdk_post_init 00:06:48.568 ************************************ 00:06:48.568 11:19:34 env -- env/env.sh@26 -- # uname 00:06:48.568 11:19:34 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:06:48.568 11:19:34 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:06:48.568 11:19:34 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:48.568 11:19:34 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:48.568 11:19:34 env -- common/autotest_common.sh@10 -- # set +x 00:06:48.568 ************************************ 00:06:48.568 START TEST env_mem_callbacks 00:06:48.568 ************************************ 00:06:48.568 11:19:34 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:06:48.568 EAL: Detected CPU lcores: 10 00:06:48.568 EAL: Detected NUMA nodes: 1 00:06:48.568 EAL: Detected shared linkage of DPDK 00:06:48.568 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:06:48.568 EAL: Selected IOVA mode 'PA' 00:06:48.826 TELEMETRY: No legacy callbacks, legacy socket not created 00:06:48.826 00:06:48.826 00:06:48.826 CUnit - A unit testing framework for C - Version 2.1-3 00:06:48.826 http://cunit.sourceforge.net/ 00:06:48.826 00:06:48.826 00:06:48.826 Suite: memory 00:06:48.826 Test: test ... 
00:06:48.826 register 0x200000200000 2097152 00:06:48.826 malloc 3145728 00:06:48.826 register 0x200000400000 4194304 00:06:48.826 buf 0x200000500000 len 3145728 PASSED 00:06:48.827 malloc 64 00:06:48.827 buf 0x2000004fff40 len 64 PASSED 00:06:48.827 malloc 4194304 00:06:48.827 register 0x200000800000 6291456 00:06:48.827 buf 0x200000a00000 len 4194304 PASSED 00:06:48.827 free 0x200000500000 3145728 00:06:48.827 free 0x2000004fff40 64 00:06:48.827 unregister 0x200000400000 4194304 PASSED 00:06:48.827 free 0x200000a00000 4194304 00:06:48.827 unregister 0x200000800000 6291456 PASSED 00:06:48.827 malloc 8388608 00:06:48.827 register 0x200000400000 10485760 00:06:48.827 buf 0x200000600000 len 8388608 PASSED 00:06:48.827 free 0x200000600000 8388608 00:06:48.827 unregister 0x200000400000 10485760 PASSED 00:06:48.827 passed 00:06:48.827 00:06:48.827 Run Summary: Type Total Ran Passed Failed Inactive 00:06:48.827 suites 1 1 n/a 0 0 00:06:48.827 tests 1 1 1 0 0 00:06:48.827 asserts 15 15 15 0 n/a 00:06:48.827 00:06:48.827 Elapsed time = 0.005 seconds 00:06:48.827 00:06:48.827 real 0m0.141s 00:06:48.827 user 0m0.017s 00:06:48.827 sys 0m0.023s 00:06:48.827 11:19:34 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:48.827 11:19:34 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:06:48.827 ************************************ 00:06:48.827 END TEST env_mem_callbacks 00:06:48.827 ************************************ 00:06:48.827 00:06:48.827 real 0m1.847s 00:06:48.827 user 0m0.896s 00:06:48.827 sys 0m0.608s 00:06:48.827 11:19:34 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:48.827 11:19:34 env -- common/autotest_common.sh@10 -- # set +x 00:06:48.827 ************************************ 00:06:48.827 END TEST env 00:06:48.827 ************************************ 00:06:48.827 11:19:34 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:06:48.827 11:19:34 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:48.827 11:19:34 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:48.827 11:19:34 -- common/autotest_common.sh@10 -- # set +x 00:06:48.827 ************************************ 00:06:48.827 START TEST rpc 00:06:48.827 ************************************ 00:06:48.827 11:19:34 rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:06:48.827 * Looking for test storage... 
00:06:48.827 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:06:48.827 11:19:34 rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:48.827 11:19:34 rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:06:48.827 11:19:34 rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:49.086 11:19:34 rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:49.086 11:19:34 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:49.086 11:19:34 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:49.086 11:19:34 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:49.086 11:19:34 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:06:49.086 11:19:34 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:06:49.086 11:19:34 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:06:49.086 11:19:34 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:06:49.086 11:19:34 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:06:49.086 11:19:34 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:06:49.086 11:19:34 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:06:49.086 11:19:34 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:49.086 11:19:34 rpc -- scripts/common.sh@344 -- # case "$op" in 00:06:49.086 11:19:34 rpc -- scripts/common.sh@345 -- # : 1 00:06:49.086 11:19:34 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:49.086 11:19:34 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:49.086 11:19:34 rpc -- scripts/common.sh@365 -- # decimal 1 00:06:49.086 11:19:34 rpc -- scripts/common.sh@353 -- # local d=1 00:06:49.086 11:19:34 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:49.086 11:19:34 rpc -- scripts/common.sh@355 -- # echo 1 00:06:49.086 11:19:34 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:06:49.086 11:19:34 rpc -- scripts/common.sh@366 -- # decimal 2 00:06:49.086 11:19:34 rpc -- scripts/common.sh@353 -- # local d=2 00:06:49.086 11:19:34 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:49.086 11:19:34 rpc -- scripts/common.sh@355 -- # echo 2 00:06:49.086 11:19:34 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:06:49.086 11:19:34 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:49.086 11:19:34 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:49.086 11:19:34 rpc -- scripts/common.sh@368 -- # return 0 00:06:49.086 11:19:34 rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:49.086 11:19:34 rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:49.086 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:49.086 --rc genhtml_branch_coverage=1 00:06:49.086 --rc genhtml_function_coverage=1 00:06:49.086 --rc genhtml_legend=1 00:06:49.086 --rc geninfo_all_blocks=1 00:06:49.086 --rc geninfo_unexecuted_blocks=1 00:06:49.086 00:06:49.086 ' 00:06:49.086 11:19:34 rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:49.086 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:49.086 --rc genhtml_branch_coverage=1 00:06:49.086 --rc genhtml_function_coverage=1 00:06:49.086 --rc genhtml_legend=1 00:06:49.086 --rc geninfo_all_blocks=1 00:06:49.086 --rc geninfo_unexecuted_blocks=1 00:06:49.086 00:06:49.086 ' 00:06:49.086 11:19:34 rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:49.086 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:49.086 --rc genhtml_branch_coverage=1 00:06:49.086 --rc genhtml_function_coverage=1 00:06:49.086 --rc 
genhtml_legend=1 00:06:49.086 --rc geninfo_all_blocks=1 00:06:49.086 --rc geninfo_unexecuted_blocks=1 00:06:49.086 00:06:49.086 ' 00:06:49.086 11:19:34 rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:49.086 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:49.086 --rc genhtml_branch_coverage=1 00:06:49.086 --rc genhtml_function_coverage=1 00:06:49.086 --rc genhtml_legend=1 00:06:49.086 --rc geninfo_all_blocks=1 00:06:49.086 --rc geninfo_unexecuted_blocks=1 00:06:49.086 00:06:49.086 ' 00:06:49.086 11:19:34 rpc -- rpc/rpc.sh@65 -- # spdk_pid=58635 00:06:49.086 11:19:34 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:06:49.086 11:19:34 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:49.086 11:19:34 rpc -- rpc/rpc.sh@67 -- # waitforlisten 58635 00:06:49.086 11:19:34 rpc -- common/autotest_common.sh@835 -- # '[' -z 58635 ']' 00:06:49.086 11:19:34 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:49.086 11:19:34 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:49.086 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:49.086 11:19:34 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:49.086 11:19:34 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:49.086 11:19:34 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:49.086 [2024-11-20 11:19:34.571852] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 00:06:49.086 [2024-11-20 11:19:34.571996] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58635 ] 00:06:49.345 [2024-11-20 11:19:34.725506] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:49.345 [2024-11-20 11:19:34.784121] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:06:49.345 [2024-11-20 11:19:34.784204] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 58635' to capture a snapshot of events at runtime. 00:06:49.345 [2024-11-20 11:19:34.784226] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:49.345 [2024-11-20 11:19:34.784241] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:49.345 [2024-11-20 11:19:34.784254] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid58635 for offline analysis/debug. 
00:06:49.345 [2024-11-20 11:19:34.784768] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.603 11:19:34 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:49.603 11:19:34 rpc -- common/autotest_common.sh@868 -- # return 0 00:06:49.603 11:19:34 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:06:49.603 11:19:34 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:06:49.603 11:19:34 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:06:49.603 11:19:34 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:06:49.603 11:19:34 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:49.603 11:19:34 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:49.603 11:19:34 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:49.603 ************************************ 00:06:49.603 START TEST rpc_integrity 00:06:49.603 ************************************ 00:06:49.603 11:19:35 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:06:49.603 11:19:35 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:06:49.603 11:19:35 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:49.603 11:19:35 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:49.603 11:19:35 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:49.603 11:19:35 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:06:49.603 11:19:35 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:06:49.603 11:19:35 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:06:49.603 11:19:35 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:06:49.603 11:19:35 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:49.603 11:19:35 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:49.603 11:19:35 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:49.604 11:19:35 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:06:49.604 11:19:35 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:06:49.604 11:19:35 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:49.604 11:19:35 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:49.604 11:19:35 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:49.604 11:19:35 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:49.604 { 00:06:49.604 "aliases": [ 00:06:49.604 "b643524b-5fc8-4c8f-877a-3c19a4afd230" 00:06:49.604 ], 00:06:49.604 "assigned_rate_limits": { 00:06:49.604 "r_mbytes_per_sec": 0, 00:06:49.604 "rw_ios_per_sec": 0, 00:06:49.604 "rw_mbytes_per_sec": 0, 00:06:49.604 "w_mbytes_per_sec": 0 00:06:49.604 }, 00:06:49.604 "block_size": 512, 00:06:49.604 "claimed": false, 00:06:49.604 "driver_specific": {}, 00:06:49.604 "memory_domains": [ 00:06:49.604 { 00:06:49.604 "dma_device_id": "system", 00:06:49.604 "dma_device_type": 1 00:06:49.604 }, 00:06:49.604 { 00:06:49.604 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:49.604 "dma_device_type": 2 00:06:49.604 } 00:06:49.604 ], 00:06:49.604 "name": "Malloc0", 
00:06:49.604 "num_blocks": 16384, 00:06:49.604 "product_name": "Malloc disk", 00:06:49.604 "supported_io_types": { 00:06:49.604 "abort": true, 00:06:49.604 "compare": false, 00:06:49.604 "compare_and_write": false, 00:06:49.604 "copy": true, 00:06:49.604 "flush": true, 00:06:49.604 "get_zone_info": false, 00:06:49.604 "nvme_admin": false, 00:06:49.604 "nvme_io": false, 00:06:49.604 "nvme_io_md": false, 00:06:49.604 "nvme_iov_md": false, 00:06:49.604 "read": true, 00:06:49.604 "reset": true, 00:06:49.604 "seek_data": false, 00:06:49.604 "seek_hole": false, 00:06:49.604 "unmap": true, 00:06:49.604 "write": true, 00:06:49.604 "write_zeroes": true, 00:06:49.604 "zcopy": true, 00:06:49.604 "zone_append": false, 00:06:49.604 "zone_management": false 00:06:49.604 }, 00:06:49.604 "uuid": "b643524b-5fc8-4c8f-877a-3c19a4afd230", 00:06:49.604 "zoned": false 00:06:49.604 } 00:06:49.604 ]' 00:06:49.604 11:19:35 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:06:49.604 11:19:35 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:49.604 11:19:35 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:06:49.604 11:19:35 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:49.604 11:19:35 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:49.604 [2024-11-20 11:19:35.168859] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:06:49.604 [2024-11-20 11:19:35.168926] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:49.604 [2024-11-20 11:19:35.168949] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1b7fba0 00:06:49.604 [2024-11-20 11:19:35.168959] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:49.604 [2024-11-20 11:19:35.170665] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:49.604 [2024-11-20 11:19:35.170700] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:06:49.604 Passthru0 00:06:49.604 11:19:35 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:49.604 11:19:35 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:06:49.604 11:19:35 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:49.604 11:19:35 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:49.862 11:19:35 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:49.862 11:19:35 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:06:49.862 { 00:06:49.862 "aliases": [ 00:06:49.862 "b643524b-5fc8-4c8f-877a-3c19a4afd230" 00:06:49.862 ], 00:06:49.862 "assigned_rate_limits": { 00:06:49.862 "r_mbytes_per_sec": 0, 00:06:49.862 "rw_ios_per_sec": 0, 00:06:49.862 "rw_mbytes_per_sec": 0, 00:06:49.862 "w_mbytes_per_sec": 0 00:06:49.862 }, 00:06:49.862 "block_size": 512, 00:06:49.862 "claim_type": "exclusive_write", 00:06:49.862 "claimed": true, 00:06:49.862 "driver_specific": {}, 00:06:49.862 "memory_domains": [ 00:06:49.862 { 00:06:49.862 "dma_device_id": "system", 00:06:49.862 "dma_device_type": 1 00:06:49.862 }, 00:06:49.862 { 00:06:49.862 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:49.862 "dma_device_type": 2 00:06:49.862 } 00:06:49.862 ], 00:06:49.862 "name": "Malloc0", 00:06:49.862 "num_blocks": 16384, 00:06:49.862 "product_name": "Malloc disk", 00:06:49.862 "supported_io_types": { 00:06:49.862 "abort": true, 00:06:49.862 "compare": false, 00:06:49.862 
"compare_and_write": false, 00:06:49.862 "copy": true, 00:06:49.862 "flush": true, 00:06:49.862 "get_zone_info": false, 00:06:49.862 "nvme_admin": false, 00:06:49.862 "nvme_io": false, 00:06:49.862 "nvme_io_md": false, 00:06:49.862 "nvme_iov_md": false, 00:06:49.862 "read": true, 00:06:49.862 "reset": true, 00:06:49.862 "seek_data": false, 00:06:49.862 "seek_hole": false, 00:06:49.862 "unmap": true, 00:06:49.862 "write": true, 00:06:49.862 "write_zeroes": true, 00:06:49.862 "zcopy": true, 00:06:49.862 "zone_append": false, 00:06:49.862 "zone_management": false 00:06:49.862 }, 00:06:49.862 "uuid": "b643524b-5fc8-4c8f-877a-3c19a4afd230", 00:06:49.862 "zoned": false 00:06:49.862 }, 00:06:49.862 { 00:06:49.862 "aliases": [ 00:06:49.862 "2656dc02-c41e-53c8-9c85-c0be04404f77" 00:06:49.862 ], 00:06:49.862 "assigned_rate_limits": { 00:06:49.862 "r_mbytes_per_sec": 0, 00:06:49.862 "rw_ios_per_sec": 0, 00:06:49.862 "rw_mbytes_per_sec": 0, 00:06:49.862 "w_mbytes_per_sec": 0 00:06:49.862 }, 00:06:49.862 "block_size": 512, 00:06:49.862 "claimed": false, 00:06:49.862 "driver_specific": { 00:06:49.862 "passthru": { 00:06:49.862 "base_bdev_name": "Malloc0", 00:06:49.862 "name": "Passthru0" 00:06:49.862 } 00:06:49.862 }, 00:06:49.862 "memory_domains": [ 00:06:49.862 { 00:06:49.862 "dma_device_id": "system", 00:06:49.862 "dma_device_type": 1 00:06:49.863 }, 00:06:49.863 { 00:06:49.863 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:49.863 "dma_device_type": 2 00:06:49.863 } 00:06:49.863 ], 00:06:49.863 "name": "Passthru0", 00:06:49.863 "num_blocks": 16384, 00:06:49.863 "product_name": "passthru", 00:06:49.863 "supported_io_types": { 00:06:49.863 "abort": true, 00:06:49.863 "compare": false, 00:06:49.863 "compare_and_write": false, 00:06:49.863 "copy": true, 00:06:49.863 "flush": true, 00:06:49.863 "get_zone_info": false, 00:06:49.863 "nvme_admin": false, 00:06:49.863 "nvme_io": false, 00:06:49.863 "nvme_io_md": false, 00:06:49.863 "nvme_iov_md": false, 00:06:49.863 "read": true, 00:06:49.863 "reset": true, 00:06:49.863 "seek_data": false, 00:06:49.863 "seek_hole": false, 00:06:49.863 "unmap": true, 00:06:49.863 "write": true, 00:06:49.863 "write_zeroes": true, 00:06:49.863 "zcopy": true, 00:06:49.863 "zone_append": false, 00:06:49.863 "zone_management": false 00:06:49.863 }, 00:06:49.863 "uuid": "2656dc02-c41e-53c8-9c85-c0be04404f77", 00:06:49.863 "zoned": false 00:06:49.863 } 00:06:49.863 ]' 00:06:49.863 11:19:35 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:06:49.863 11:19:35 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:49.863 11:19:35 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:49.863 11:19:35 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:49.863 11:19:35 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:49.863 11:19:35 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:49.863 11:19:35 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:06:49.863 11:19:35 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:49.863 11:19:35 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:49.863 11:19:35 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:49.863 11:19:35 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:06:49.863 11:19:35 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:49.863 11:19:35 rpc.rpc_integrity -- common/autotest_common.sh@10 -- 
# set +x 00:06:49.863 11:19:35 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:49.863 11:19:35 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:06:49.863 11:19:35 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:06:49.863 11:19:35 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:49.863 00:06:49.863 real 0m0.336s 00:06:49.863 user 0m0.226s 00:06:49.863 sys 0m0.035s 00:06:49.863 11:19:35 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:49.863 11:19:35 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:49.863 ************************************ 00:06:49.863 END TEST rpc_integrity 00:06:49.863 ************************************ 00:06:49.863 11:19:35 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:06:49.863 11:19:35 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:49.863 11:19:35 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:49.863 11:19:35 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:49.863 ************************************ 00:06:49.863 START TEST rpc_plugins 00:06:49.863 ************************************ 00:06:49.863 11:19:35 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:06:49.863 11:19:35 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:06:49.863 11:19:35 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:49.863 11:19:35 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:49.863 11:19:35 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:49.863 11:19:35 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:06:49.863 11:19:35 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:06:49.863 11:19:35 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:49.863 11:19:35 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:49.863 11:19:35 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:49.863 11:19:35 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:06:49.863 { 00:06:49.863 "aliases": [ 00:06:49.863 "05fbac73-a111-4b09-9d7d-f3794a003474" 00:06:49.863 ], 00:06:49.863 "assigned_rate_limits": { 00:06:49.863 "r_mbytes_per_sec": 0, 00:06:49.863 "rw_ios_per_sec": 0, 00:06:49.863 "rw_mbytes_per_sec": 0, 00:06:49.863 "w_mbytes_per_sec": 0 00:06:49.863 }, 00:06:49.863 "block_size": 4096, 00:06:49.863 "claimed": false, 00:06:49.863 "driver_specific": {}, 00:06:49.863 "memory_domains": [ 00:06:49.863 { 00:06:49.863 "dma_device_id": "system", 00:06:49.863 "dma_device_type": 1 00:06:49.863 }, 00:06:49.863 { 00:06:49.863 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:49.863 "dma_device_type": 2 00:06:49.863 } 00:06:49.863 ], 00:06:49.863 "name": "Malloc1", 00:06:49.863 "num_blocks": 256, 00:06:49.863 "product_name": "Malloc disk", 00:06:49.863 "supported_io_types": { 00:06:49.863 "abort": true, 00:06:49.863 "compare": false, 00:06:49.863 "compare_and_write": false, 00:06:49.863 "copy": true, 00:06:49.863 "flush": true, 00:06:49.863 "get_zone_info": false, 00:06:49.863 "nvme_admin": false, 00:06:49.863 "nvme_io": false, 00:06:49.863 "nvme_io_md": false, 00:06:49.863 "nvme_iov_md": false, 00:06:49.863 "read": true, 00:06:49.863 "reset": true, 00:06:49.863 "seek_data": false, 00:06:49.863 "seek_hole": false, 00:06:49.863 "unmap": true, 00:06:49.863 "write": true, 00:06:49.863 "write_zeroes": true, 00:06:49.863 "zcopy": true, 00:06:49.863 "zone_append": false, 
00:06:49.863 "zone_management": false 00:06:49.863 }, 00:06:49.863 "uuid": "05fbac73-a111-4b09-9d7d-f3794a003474", 00:06:49.863 "zoned": false 00:06:49.863 } 00:06:49.863 ]' 00:06:49.863 11:19:35 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:06:50.122 11:19:35 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:06:50.122 11:19:35 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:06:50.122 11:19:35 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:50.122 11:19:35 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:50.122 11:19:35 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:50.122 11:19:35 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:06:50.122 11:19:35 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:50.122 11:19:35 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:50.122 11:19:35 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:50.122 11:19:35 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:06:50.122 11:19:35 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:06:50.122 11:19:35 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:06:50.122 00:06:50.122 real 0m0.176s 00:06:50.122 user 0m0.125s 00:06:50.122 sys 0m0.017s 00:06:50.122 11:19:35 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:50.122 ************************************ 00:06:50.122 END TEST rpc_plugins 00:06:50.122 ************************************ 00:06:50.122 11:19:35 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:50.122 11:19:35 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:06:50.122 11:19:35 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:50.122 11:19:35 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:50.122 11:19:35 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:50.122 ************************************ 00:06:50.122 START TEST rpc_trace_cmd_test 00:06:50.122 ************************************ 00:06:50.122 11:19:35 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:06:50.122 11:19:35 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:06:50.122 11:19:35 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:06:50.122 11:19:35 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:50.122 11:19:35 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:06:50.122 11:19:35 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:50.122 11:19:35 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:06:50.122 "bdev": { 00:06:50.122 "mask": "0x8", 00:06:50.122 "tpoint_mask": "0xffffffffffffffff" 00:06:50.122 }, 00:06:50.122 "bdev_nvme": { 00:06:50.122 "mask": "0x4000", 00:06:50.122 "tpoint_mask": "0x0" 00:06:50.122 }, 00:06:50.122 "bdev_raid": { 00:06:50.122 "mask": "0x20000", 00:06:50.122 "tpoint_mask": "0x0" 00:06:50.122 }, 00:06:50.122 "blob": { 00:06:50.122 "mask": "0x10000", 00:06:50.122 "tpoint_mask": "0x0" 00:06:50.122 }, 00:06:50.122 "blobfs": { 00:06:50.122 "mask": "0x80", 00:06:50.122 "tpoint_mask": "0x0" 00:06:50.122 }, 00:06:50.122 "dsa": { 00:06:50.122 "mask": "0x200", 00:06:50.122 "tpoint_mask": "0x0" 00:06:50.122 }, 00:06:50.122 "ftl": { 00:06:50.122 "mask": "0x40", 00:06:50.122 "tpoint_mask": "0x0" 00:06:50.122 }, 00:06:50.122 "iaa": { 00:06:50.122 "mask": "0x1000", 
00:06:50.122 "tpoint_mask": "0x0" 00:06:50.122 }, 00:06:50.122 "iscsi_conn": { 00:06:50.122 "mask": "0x2", 00:06:50.122 "tpoint_mask": "0x0" 00:06:50.122 }, 00:06:50.122 "nvme_pcie": { 00:06:50.122 "mask": "0x800", 00:06:50.122 "tpoint_mask": "0x0" 00:06:50.122 }, 00:06:50.122 "nvme_tcp": { 00:06:50.122 "mask": "0x2000", 00:06:50.122 "tpoint_mask": "0x0" 00:06:50.122 }, 00:06:50.122 "nvmf_rdma": { 00:06:50.122 "mask": "0x10", 00:06:50.122 "tpoint_mask": "0x0" 00:06:50.122 }, 00:06:50.122 "nvmf_tcp": { 00:06:50.122 "mask": "0x20", 00:06:50.122 "tpoint_mask": "0x0" 00:06:50.122 }, 00:06:50.122 "scheduler": { 00:06:50.122 "mask": "0x40000", 00:06:50.122 "tpoint_mask": "0x0" 00:06:50.122 }, 00:06:50.122 "scsi": { 00:06:50.122 "mask": "0x4", 00:06:50.122 "tpoint_mask": "0x0" 00:06:50.122 }, 00:06:50.122 "sock": { 00:06:50.122 "mask": "0x8000", 00:06:50.122 "tpoint_mask": "0x0" 00:06:50.122 }, 00:06:50.122 "thread": { 00:06:50.122 "mask": "0x400", 00:06:50.122 "tpoint_mask": "0x0" 00:06:50.122 }, 00:06:50.122 "tpoint_group_mask": "0x8", 00:06:50.122 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid58635" 00:06:50.122 }' 00:06:50.122 11:19:35 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:06:50.122 11:19:35 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:06:50.122 11:19:35 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:06:50.438 11:19:35 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:06:50.438 11:19:35 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:06:50.438 11:19:35 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:06:50.438 11:19:35 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:06:50.438 11:19:35 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:06:50.438 11:19:35 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:06:50.438 11:19:35 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:06:50.438 00:06:50.438 real 0m0.315s 00:06:50.438 user 0m0.276s 00:06:50.438 sys 0m0.027s 00:06:50.438 11:19:35 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:50.438 ************************************ 00:06:50.438 END TEST rpc_trace_cmd_test 00:06:50.438 ************************************ 00:06:50.438 11:19:35 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:06:50.438 11:19:35 rpc -- rpc/rpc.sh@76 -- # [[ 1 -eq 1 ]] 00:06:50.438 11:19:35 rpc -- rpc/rpc.sh@77 -- # run_test go_rpc go_rpc 00:06:50.438 11:19:35 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:50.438 11:19:35 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:50.438 11:19:35 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:50.438 ************************************ 00:06:50.438 START TEST go_rpc 00:06:50.438 ************************************ 00:06:50.438 11:19:35 rpc.go_rpc -- common/autotest_common.sh@1129 -- # go_rpc 00:06:50.438 11:19:35 rpc.go_rpc -- rpc/rpc.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc 00:06:50.438 11:19:35 rpc.go_rpc -- rpc/rpc.sh@51 -- # bdevs='[]' 00:06:50.438 11:19:35 rpc.go_rpc -- rpc/rpc.sh@52 -- # jq length 00:06:50.722 11:19:36 rpc.go_rpc -- rpc/rpc.sh@52 -- # '[' 0 == 0 ']' 00:06:50.722 11:19:36 rpc.go_rpc -- rpc/rpc.sh@54 -- # rpc_cmd bdev_malloc_create 8 512 00:06:50.722 11:19:36 rpc.go_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:50.722 11:19:36 rpc.go_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:06:50.722 11:19:36 rpc.go_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:50.722 11:19:36 rpc.go_rpc -- rpc/rpc.sh@54 -- # malloc=Malloc2 00:06:50.722 11:19:36 rpc.go_rpc -- rpc/rpc.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc 00:06:50.722 11:19:36 rpc.go_rpc -- rpc/rpc.sh@56 -- # bdevs='[{"aliases":["29d28177-82b7-44b3-a331-034fb85dd391"],"assigned_rate_limits":{"r_mbytes_per_sec":0,"rw_ios_per_sec":0,"rw_mbytes_per_sec":0,"w_mbytes_per_sec":0},"block_size":512,"claimed":false,"driver_specific":{},"memory_domains":[{"dma_device_id":"system","dma_device_type":1},{"dma_device_id":"SPDK_ACCEL_DMA_DEVICE","dma_device_type":2}],"name":"Malloc2","num_blocks":16384,"product_name":"Malloc disk","supported_io_types":{"abort":true,"compare":false,"compare_and_write":false,"copy":true,"flush":true,"get_zone_info":false,"nvme_admin":false,"nvme_io":false,"nvme_io_md":false,"nvme_iov_md":false,"read":true,"reset":true,"seek_data":false,"seek_hole":false,"unmap":true,"write":true,"write_zeroes":true,"zcopy":true,"zone_append":false,"zone_management":false},"uuid":"29d28177-82b7-44b3-a331-034fb85dd391","zoned":false}]' 00:06:50.722 11:19:36 rpc.go_rpc -- rpc/rpc.sh@57 -- # jq length 00:06:50.722 11:19:36 rpc.go_rpc -- rpc/rpc.sh@57 -- # '[' 1 == 1 ']' 00:06:50.722 11:19:36 rpc.go_rpc -- rpc/rpc.sh@59 -- # rpc_cmd bdev_malloc_delete Malloc2 00:06:50.722 11:19:36 rpc.go_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:50.722 11:19:36 rpc.go_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:50.722 11:19:36 rpc.go_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:50.722 11:19:36 rpc.go_rpc -- rpc/rpc.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc 00:06:50.722 11:19:36 rpc.go_rpc -- rpc/rpc.sh@60 -- # bdevs='[]' 00:06:50.722 11:19:36 rpc.go_rpc -- rpc/rpc.sh@61 -- # jq length 00:06:50.722 11:19:36 rpc.go_rpc -- rpc/rpc.sh@61 -- # '[' 0 == 0 ']' 00:06:50.722 00:06:50.722 real 0m0.239s 00:06:50.722 user 0m0.159s 00:06:50.722 sys 0m0.039s 00:06:50.722 11:19:36 rpc.go_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:50.722 ************************************ 00:06:50.722 END TEST go_rpc 00:06:50.722 ************************************ 00:06:50.722 11:19:36 rpc.go_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:50.722 11:19:36 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:06:50.722 11:19:36 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:06:50.722 11:19:36 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:50.722 11:19:36 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:50.722 11:19:36 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:50.722 ************************************ 00:06:50.723 START TEST rpc_daemon_integrity 00:06:50.723 ************************************ 00:06:50.723 11:19:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:06:50.723 11:19:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:06:50.723 11:19:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:50.723 11:19:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:50.723 11:19:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:50.723 11:19:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:06:50.723 11:19:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:06:50.723 
11:19:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:06:50.723 11:19:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:06:50.723 11:19:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:50.723 11:19:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:50.982 11:19:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:50.982 11:19:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc3 00:06:50.982 11:19:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:06:50.982 11:19:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:50.982 11:19:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:50.982 11:19:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:50.982 11:19:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:50.982 { 00:06:50.982 "aliases": [ 00:06:50.982 "2a6ff908-1186-44c2-ac66-9b7a5962e505" 00:06:50.982 ], 00:06:50.982 "assigned_rate_limits": { 00:06:50.982 "r_mbytes_per_sec": 0, 00:06:50.982 "rw_ios_per_sec": 0, 00:06:50.982 "rw_mbytes_per_sec": 0, 00:06:50.982 "w_mbytes_per_sec": 0 00:06:50.982 }, 00:06:50.982 "block_size": 512, 00:06:50.982 "claimed": false, 00:06:50.982 "driver_specific": {}, 00:06:50.982 "memory_domains": [ 00:06:50.982 { 00:06:50.982 "dma_device_id": "system", 00:06:50.982 "dma_device_type": 1 00:06:50.982 }, 00:06:50.982 { 00:06:50.982 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:50.982 "dma_device_type": 2 00:06:50.982 } 00:06:50.982 ], 00:06:50.982 "name": "Malloc3", 00:06:50.982 "num_blocks": 16384, 00:06:50.982 "product_name": "Malloc disk", 00:06:50.982 "supported_io_types": { 00:06:50.982 "abort": true, 00:06:50.982 "compare": false, 00:06:50.982 "compare_and_write": false, 00:06:50.982 "copy": true, 00:06:50.982 "flush": true, 00:06:50.982 "get_zone_info": false, 00:06:50.982 "nvme_admin": false, 00:06:50.982 "nvme_io": false, 00:06:50.982 "nvme_io_md": false, 00:06:50.982 "nvme_iov_md": false, 00:06:50.982 "read": true, 00:06:50.982 "reset": true, 00:06:50.982 "seek_data": false, 00:06:50.982 "seek_hole": false, 00:06:50.982 "unmap": true, 00:06:50.982 "write": true, 00:06:50.982 "write_zeroes": true, 00:06:50.982 "zcopy": true, 00:06:50.982 "zone_append": false, 00:06:50.982 "zone_management": false 00:06:50.982 }, 00:06:50.982 "uuid": "2a6ff908-1186-44c2-ac66-9b7a5962e505", 00:06:50.982 "zoned": false 00:06:50.982 } 00:06:50.982 ]' 00:06:50.982 11:19:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:06:50.982 11:19:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:50.982 11:19:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc3 -p Passthru0 00:06:50.982 11:19:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:50.982 11:19:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:50.982 [2024-11-20 11:19:36.393897] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:06:50.982 [2024-11-20 11:19:36.393960] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:50.982 [2024-11-20 11:19:36.393983] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1a3fc40 00:06:50.982 [2024-11-20 11:19:36.393995] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev 
claimed 00:06:50.982 [2024-11-20 11:19:36.395490] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:50.982 [2024-11-20 11:19:36.395524] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:06:50.982 Passthru0 00:06:50.982 11:19:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:50.982 11:19:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:06:50.982 11:19:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:50.982 11:19:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:50.982 11:19:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:50.982 11:19:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:06:50.982 { 00:06:50.982 "aliases": [ 00:06:50.982 "2a6ff908-1186-44c2-ac66-9b7a5962e505" 00:06:50.982 ], 00:06:50.982 "assigned_rate_limits": { 00:06:50.982 "r_mbytes_per_sec": 0, 00:06:50.982 "rw_ios_per_sec": 0, 00:06:50.982 "rw_mbytes_per_sec": 0, 00:06:50.982 "w_mbytes_per_sec": 0 00:06:50.982 }, 00:06:50.982 "block_size": 512, 00:06:50.982 "claim_type": "exclusive_write", 00:06:50.982 "claimed": true, 00:06:50.982 "driver_specific": {}, 00:06:50.982 "memory_domains": [ 00:06:50.982 { 00:06:50.982 "dma_device_id": "system", 00:06:50.982 "dma_device_type": 1 00:06:50.982 }, 00:06:50.982 { 00:06:50.982 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:50.982 "dma_device_type": 2 00:06:50.982 } 00:06:50.982 ], 00:06:50.982 "name": "Malloc3", 00:06:50.982 "num_blocks": 16384, 00:06:50.982 "product_name": "Malloc disk", 00:06:50.982 "supported_io_types": { 00:06:50.982 "abort": true, 00:06:50.982 "compare": false, 00:06:50.982 "compare_and_write": false, 00:06:50.982 "copy": true, 00:06:50.982 "flush": true, 00:06:50.982 "get_zone_info": false, 00:06:50.982 "nvme_admin": false, 00:06:50.982 "nvme_io": false, 00:06:50.982 "nvme_io_md": false, 00:06:50.982 "nvme_iov_md": false, 00:06:50.982 "read": true, 00:06:50.982 "reset": true, 00:06:50.982 "seek_data": false, 00:06:50.982 "seek_hole": false, 00:06:50.982 "unmap": true, 00:06:50.982 "write": true, 00:06:50.982 "write_zeroes": true, 00:06:50.982 "zcopy": true, 00:06:50.982 "zone_append": false, 00:06:50.982 "zone_management": false 00:06:50.982 }, 00:06:50.982 "uuid": "2a6ff908-1186-44c2-ac66-9b7a5962e505", 00:06:50.982 "zoned": false 00:06:50.982 }, 00:06:50.982 { 00:06:50.982 "aliases": [ 00:06:50.982 "13702096-764c-5c98-87c2-05f79041862c" 00:06:50.982 ], 00:06:50.982 "assigned_rate_limits": { 00:06:50.982 "r_mbytes_per_sec": 0, 00:06:50.982 "rw_ios_per_sec": 0, 00:06:50.982 "rw_mbytes_per_sec": 0, 00:06:50.982 "w_mbytes_per_sec": 0 00:06:50.982 }, 00:06:50.982 "block_size": 512, 00:06:50.982 "claimed": false, 00:06:50.983 "driver_specific": { 00:06:50.983 "passthru": { 00:06:50.983 "base_bdev_name": "Malloc3", 00:06:50.983 "name": "Passthru0" 00:06:50.983 } 00:06:50.983 }, 00:06:50.983 "memory_domains": [ 00:06:50.983 { 00:06:50.983 "dma_device_id": "system", 00:06:50.983 "dma_device_type": 1 00:06:50.983 }, 00:06:50.983 { 00:06:50.983 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:50.983 "dma_device_type": 2 00:06:50.983 } 00:06:50.983 ], 00:06:50.983 "name": "Passthru0", 00:06:50.983 "num_blocks": 16384, 00:06:50.983 "product_name": "passthru", 00:06:50.983 "supported_io_types": { 00:06:50.983 "abort": true, 00:06:50.983 "compare": false, 00:06:50.983 "compare_and_write": false, 00:06:50.983 "copy": true, 
00:06:50.983 "flush": true, 00:06:50.983 "get_zone_info": false, 00:06:50.983 "nvme_admin": false, 00:06:50.983 "nvme_io": false, 00:06:50.983 "nvme_io_md": false, 00:06:50.983 "nvme_iov_md": false, 00:06:50.983 "read": true, 00:06:50.983 "reset": true, 00:06:50.983 "seek_data": false, 00:06:50.983 "seek_hole": false, 00:06:50.983 "unmap": true, 00:06:50.983 "write": true, 00:06:50.983 "write_zeroes": true, 00:06:50.983 "zcopy": true, 00:06:50.983 "zone_append": false, 00:06:50.983 "zone_management": false 00:06:50.983 }, 00:06:50.983 "uuid": "13702096-764c-5c98-87c2-05f79041862c", 00:06:50.983 "zoned": false 00:06:50.983 } 00:06:50.983 ]' 00:06:50.983 11:19:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:06:50.983 11:19:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:50.983 11:19:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:50.983 11:19:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:50.983 11:19:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:50.983 11:19:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:50.983 11:19:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc3 00:06:50.983 11:19:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:50.983 11:19:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:50.983 11:19:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:50.983 11:19:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:06:50.983 11:19:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:50.983 11:19:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:50.983 11:19:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:50.983 11:19:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:06:50.983 11:19:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:06:50.983 11:19:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:50.983 00:06:50.983 real 0m0.333s 00:06:50.983 user 0m0.225s 00:06:50.983 sys 0m0.034s 00:06:50.983 11:19:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:50.983 ************************************ 00:06:50.983 11:19:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:50.983 END TEST rpc_daemon_integrity 00:06:50.983 ************************************ 00:06:51.242 11:19:36 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:06:51.242 11:19:36 rpc -- rpc/rpc.sh@84 -- # killprocess 58635 00:06:51.242 11:19:36 rpc -- common/autotest_common.sh@954 -- # '[' -z 58635 ']' 00:06:51.242 11:19:36 rpc -- common/autotest_common.sh@958 -- # kill -0 58635 00:06:51.242 11:19:36 rpc -- common/autotest_common.sh@959 -- # uname 00:06:51.242 11:19:36 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:51.242 11:19:36 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58635 00:06:51.242 11:19:36 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:51.242 11:19:36 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:51.242 killing process with pid 58635 00:06:51.242 11:19:36 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58635' 00:06:51.242 11:19:36 rpc -- 
common/autotest_common.sh@973 -- # kill 58635 00:06:51.242 11:19:36 rpc -- common/autotest_common.sh@978 -- # wait 58635 00:06:51.500 00:06:51.500 real 0m2.608s 00:06:51.500 user 0m3.620s 00:06:51.500 sys 0m0.632s 00:06:51.500 11:19:36 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:51.500 11:19:36 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:51.500 ************************************ 00:06:51.500 END TEST rpc 00:06:51.500 ************************************ 00:06:51.500 11:19:36 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:06:51.500 11:19:36 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:51.500 11:19:36 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:51.500 11:19:36 -- common/autotest_common.sh@10 -- # set +x 00:06:51.500 ************************************ 00:06:51.500 START TEST skip_rpc 00:06:51.500 ************************************ 00:06:51.500 11:19:36 skip_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:06:51.500 * Looking for test storage... 00:06:51.500 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:06:51.500 11:19:36 skip_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:51.500 11:19:36 skip_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:06:51.500 11:19:36 skip_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:51.759 11:19:37 skip_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:51.759 11:19:37 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:51.759 11:19:37 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:51.759 11:19:37 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:51.759 11:19:37 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:06:51.759 11:19:37 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:06:51.759 11:19:37 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:06:51.759 11:19:37 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:06:51.759 11:19:37 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:06:51.759 11:19:37 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:06:51.759 11:19:37 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:06:51.759 11:19:37 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:51.759 11:19:37 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:06:51.759 11:19:37 skip_rpc -- scripts/common.sh@345 -- # : 1 00:06:51.759 11:19:37 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:51.759 11:19:37 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:51.759 11:19:37 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:06:51.759 11:19:37 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:06:51.759 11:19:37 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:51.759 11:19:37 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:06:51.759 11:19:37 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:06:51.759 11:19:37 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:06:51.759 11:19:37 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:06:51.759 11:19:37 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:51.759 11:19:37 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:06:51.759 11:19:37 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:06:51.759 11:19:37 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:51.759 11:19:37 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:51.759 11:19:37 skip_rpc -- scripts/common.sh@368 -- # return 0 00:06:51.759 11:19:37 skip_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:51.759 11:19:37 skip_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:51.759 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:51.759 --rc genhtml_branch_coverage=1 00:06:51.759 --rc genhtml_function_coverage=1 00:06:51.759 --rc genhtml_legend=1 00:06:51.759 --rc geninfo_all_blocks=1 00:06:51.759 --rc geninfo_unexecuted_blocks=1 00:06:51.759 00:06:51.759 ' 00:06:51.759 11:19:37 skip_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:51.759 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:51.759 --rc genhtml_branch_coverage=1 00:06:51.759 --rc genhtml_function_coverage=1 00:06:51.759 --rc genhtml_legend=1 00:06:51.759 --rc geninfo_all_blocks=1 00:06:51.759 --rc geninfo_unexecuted_blocks=1 00:06:51.759 00:06:51.759 ' 00:06:51.759 11:19:37 skip_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:51.759 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:51.759 --rc genhtml_branch_coverage=1 00:06:51.759 --rc genhtml_function_coverage=1 00:06:51.759 --rc genhtml_legend=1 00:06:51.759 --rc geninfo_all_blocks=1 00:06:51.759 --rc geninfo_unexecuted_blocks=1 00:06:51.759 00:06:51.759 ' 00:06:51.759 11:19:37 skip_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:51.759 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:51.759 --rc genhtml_branch_coverage=1 00:06:51.759 --rc genhtml_function_coverage=1 00:06:51.759 --rc genhtml_legend=1 00:06:51.759 --rc geninfo_all_blocks=1 00:06:51.759 --rc geninfo_unexecuted_blocks=1 00:06:51.759 00:06:51.759 ' 00:06:51.759 11:19:37 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:51.759 11:19:37 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:06:51.759 11:19:37 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:06:51.759 11:19:37 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:51.759 11:19:37 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:51.759 11:19:37 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:51.759 ************************************ 00:06:51.759 START TEST skip_rpc 00:06:51.759 ************************************ 00:06:51.759 11:19:37 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:06:51.759 11:19:37 skip_rpc.skip_rpc -- 
rpc/skip_rpc.sh@16 -- # local spdk_pid=58891 00:06:51.759 11:19:37 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:06:51.759 11:19:37 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:51.759 11:19:37 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:06:51.759 [2024-11-20 11:19:37.224603] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 00:06:51.759 [2024-11-20 11:19:37.224794] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58891 ] 00:06:52.017 [2024-11-20 11:19:37.394263] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:52.017 [2024-11-20 11:19:37.443562] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.285 11:19:42 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:06:57.285 11:19:42 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:06:57.285 11:19:42 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:06:57.285 11:19:42 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:06:57.285 11:19:42 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:57.285 11:19:42 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:06:57.285 11:19:42 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:57.285 11:19:42 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:06:57.285 11:19:42 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:57.285 11:19:42 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:57.285 2024/11/20 11:19:42 error on client creation, err: error during client creation for Unix socket, err: could not connect to a Unix socket on address /var/tmp/spdk.sock, err: dial unix /var/tmp/spdk.sock: connect: no such file or directory 00:06:57.285 11:19:42 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:06:57.285 11:19:42 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:06:57.285 11:19:42 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:57.285 11:19:42 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:57.285 11:19:42 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:57.285 11:19:42 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:06:57.285 11:19:42 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 58891 00:06:57.285 11:19:42 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 58891 ']' 00:06:57.285 11:19:42 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 58891 00:06:57.285 11:19:42 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:06:57.285 11:19:42 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:57.285 11:19:42 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58891 00:06:57.285 11:19:42 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:57.285 11:19:42 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' 
reactor_0 = sudo ']' 00:06:57.285 killing process with pid 58891 00:06:57.285 11:19:42 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58891' 00:06:57.285 11:19:42 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 58891 00:06:57.285 11:19:42 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 58891 00:06:57.285 00:06:57.285 real 0m5.323s 00:06:57.285 user 0m5.008s 00:06:57.285 sys 0m0.200s 00:06:57.285 11:19:42 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:57.285 11:19:42 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:57.285 ************************************ 00:06:57.285 END TEST skip_rpc 00:06:57.285 ************************************ 00:06:57.285 11:19:42 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:06:57.285 11:19:42 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:57.285 11:19:42 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:57.285 11:19:42 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:57.285 ************************************ 00:06:57.285 START TEST skip_rpc_with_json 00:06:57.285 ************************************ 00:06:57.285 11:19:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:06:57.285 11:19:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:06:57.285 11:19:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=58978 00:06:57.285 11:19:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:57.285 11:19:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:57.285 11:19:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 58978 00:06:57.285 11:19:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 58978 ']' 00:06:57.285 11:19:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:57.285 11:19:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:57.285 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:57.285 11:19:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:57.285 11:19:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:57.285 11:19:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:57.285 [2024-11-20 11:19:42.562755] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 
00:06:57.285 [2024-11-20 11:19:42.562877] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58978 ] 00:06:57.285 [2024-11-20 11:19:42.714373] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:57.285 [2024-11-20 11:19:42.761260] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.544 11:19:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:57.544 11:19:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:06:57.544 11:19:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:06:57.544 11:19:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:57.545 11:19:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:57.545 [2024-11-20 11:19:42.960957] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:06:57.545 2024/11/20 11:19:42 error on JSON-RPC call, method: nvmf_get_transports, params: map[trtype:tcp], err: error received for nvmf_get_transports method, err: Code=-19 Msg=No such device 00:06:57.545 request: 00:06:57.545 { 00:06:57.545 "method": "nvmf_get_transports", 00:06:57.545 "params": { 00:06:57.545 "trtype": "tcp" 00:06:57.545 } 00:06:57.545 } 00:06:57.545 Got JSON-RPC error response 00:06:57.545 GoRPCClient: error on JSON-RPC call 00:06:57.545 11:19:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:06:57.545 11:19:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:06:57.545 11:19:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:57.545 11:19:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:57.545 [2024-11-20 11:19:42.973128] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:57.545 11:19:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:57.545 11:19:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:06:57.545 11:19:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:57.545 11:19:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:57.803 11:19:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:57.803 11:19:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:57.803 { 00:06:57.803 "subsystems": [ 00:06:57.803 { 00:06:57.803 "subsystem": "fsdev", 00:06:57.803 "config": [ 00:06:57.803 { 00:06:57.803 "method": "fsdev_set_opts", 00:06:57.803 "params": { 00:06:57.803 "fsdev_io_cache_size": 256, 00:06:57.803 "fsdev_io_pool_size": 65535 00:06:57.803 } 00:06:57.803 } 00:06:57.803 ] 00:06:57.803 }, 00:06:57.804 { 00:06:57.804 "subsystem": "keyring", 00:06:57.804 "config": [] 00:06:57.804 }, 00:06:57.804 { 00:06:57.804 "subsystem": "iobuf", 00:06:57.804 "config": [ 00:06:57.804 { 00:06:57.804 "method": "iobuf_set_options", 00:06:57.804 "params": { 00:06:57.804 "enable_numa": false, 00:06:57.804 "large_bufsize": 135168, 00:06:57.804 "large_pool_count": 1024, 00:06:57.804 "small_bufsize": 8192, 00:06:57.804 "small_pool_count": 8192 00:06:57.804 } 
00:06:57.804 } 00:06:57.804 ] 00:06:57.804 }, 00:06:57.804 { 00:06:57.804 "subsystem": "sock", 00:06:57.804 "config": [ 00:06:57.804 { 00:06:57.804 "method": "sock_set_default_impl", 00:06:57.804 "params": { 00:06:57.804 "impl_name": "posix" 00:06:57.804 } 00:06:57.804 }, 00:06:57.804 { 00:06:57.804 "method": "sock_impl_set_options", 00:06:57.804 "params": { 00:06:57.804 "enable_ktls": false, 00:06:57.804 "enable_placement_id": 0, 00:06:57.804 "enable_quickack": false, 00:06:57.804 "enable_recv_pipe": true, 00:06:57.804 "enable_zerocopy_send_client": false, 00:06:57.804 "enable_zerocopy_send_server": true, 00:06:57.804 "impl_name": "ssl", 00:06:57.804 "recv_buf_size": 4096, 00:06:57.804 "send_buf_size": 4096, 00:06:57.804 "tls_version": 0, 00:06:57.804 "zerocopy_threshold": 0 00:06:57.804 } 00:06:57.804 }, 00:06:57.804 { 00:06:57.804 "method": "sock_impl_set_options", 00:06:57.804 "params": { 00:06:57.804 "enable_ktls": false, 00:06:57.804 "enable_placement_id": 0, 00:06:57.804 "enable_quickack": false, 00:06:57.804 "enable_recv_pipe": true, 00:06:57.804 "enable_zerocopy_send_client": false, 00:06:57.804 "enable_zerocopy_send_server": true, 00:06:57.804 "impl_name": "posix", 00:06:57.804 "recv_buf_size": 2097152, 00:06:57.804 "send_buf_size": 2097152, 00:06:57.804 "tls_version": 0, 00:06:57.804 "zerocopy_threshold": 0 00:06:57.804 } 00:06:57.804 } 00:06:57.804 ] 00:06:57.804 }, 00:06:57.804 { 00:06:57.804 "subsystem": "vmd", 00:06:57.804 "config": [] 00:06:57.804 }, 00:06:57.804 { 00:06:57.804 "subsystem": "accel", 00:06:57.804 "config": [ 00:06:57.804 { 00:06:57.804 "method": "accel_set_options", 00:06:57.804 "params": { 00:06:57.804 "buf_count": 2048, 00:06:57.804 "large_cache_size": 16, 00:06:57.804 "sequence_count": 2048, 00:06:57.804 "small_cache_size": 128, 00:06:57.804 "task_count": 2048 00:06:57.804 } 00:06:57.804 } 00:06:57.804 ] 00:06:57.804 }, 00:06:57.804 { 00:06:57.804 "subsystem": "bdev", 00:06:57.804 "config": [ 00:06:57.804 { 00:06:57.804 "method": "bdev_set_options", 00:06:57.804 "params": { 00:06:57.804 "bdev_auto_examine": true, 00:06:57.804 "bdev_io_cache_size": 256, 00:06:57.804 "bdev_io_pool_size": 65535, 00:06:57.804 "iobuf_large_cache_size": 16, 00:06:57.804 "iobuf_small_cache_size": 128 00:06:57.804 } 00:06:57.804 }, 00:06:57.804 { 00:06:57.804 "method": "bdev_raid_set_options", 00:06:57.804 "params": { 00:06:57.804 "process_max_bandwidth_mb_sec": 0, 00:06:57.804 "process_window_size_kb": 1024 00:06:57.804 } 00:06:57.804 }, 00:06:57.804 { 00:06:57.804 "method": "bdev_iscsi_set_options", 00:06:57.804 "params": { 00:06:57.804 "timeout_sec": 30 00:06:57.804 } 00:06:57.804 }, 00:06:57.804 { 00:06:57.804 "method": "bdev_nvme_set_options", 00:06:57.804 "params": { 00:06:57.804 "action_on_timeout": "none", 00:06:57.804 "allow_accel_sequence": false, 00:06:57.804 "arbitration_burst": 0, 00:06:57.804 "bdev_retry_count": 3, 00:06:57.804 "ctrlr_loss_timeout_sec": 0, 00:06:57.804 "delay_cmd_submit": true, 00:06:57.804 "dhchap_dhgroups": [ 00:06:57.804 "null", 00:06:57.804 "ffdhe2048", 00:06:57.804 "ffdhe3072", 00:06:57.804 "ffdhe4096", 00:06:57.804 "ffdhe6144", 00:06:57.804 "ffdhe8192" 00:06:57.804 ], 00:06:57.804 "dhchap_digests": [ 00:06:57.804 "sha256", 00:06:57.804 "sha384", 00:06:57.804 "sha512" 00:06:57.804 ], 00:06:57.804 "disable_auto_failback": false, 00:06:57.804 "fast_io_fail_timeout_sec": 0, 00:06:57.804 "generate_uuids": false, 00:06:57.804 "high_priority_weight": 0, 00:06:57.804 "io_path_stat": false, 00:06:57.804 "io_queue_requests": 0, 00:06:57.804 
"keep_alive_timeout_ms": 10000, 00:06:57.804 "low_priority_weight": 0, 00:06:57.804 "medium_priority_weight": 0, 00:06:57.804 "nvme_adminq_poll_period_us": 10000, 00:06:57.804 "nvme_error_stat": false, 00:06:57.804 "nvme_ioq_poll_period_us": 0, 00:06:57.804 "rdma_cm_event_timeout_ms": 0, 00:06:57.804 "rdma_max_cq_size": 0, 00:06:57.804 "rdma_srq_size": 0, 00:06:57.804 "reconnect_delay_sec": 0, 00:06:57.804 "timeout_admin_us": 0, 00:06:57.804 "timeout_us": 0, 00:06:57.804 "transport_ack_timeout": 0, 00:06:57.804 "transport_retry_count": 4, 00:06:57.804 "transport_tos": 0 00:06:57.804 } 00:06:57.804 }, 00:06:57.804 { 00:06:57.804 "method": "bdev_nvme_set_hotplug", 00:06:57.804 "params": { 00:06:57.804 "enable": false, 00:06:57.804 "period_us": 100000 00:06:57.804 } 00:06:57.804 }, 00:06:57.804 { 00:06:57.804 "method": "bdev_wait_for_examine" 00:06:57.804 } 00:06:57.804 ] 00:06:57.804 }, 00:06:57.804 { 00:06:57.804 "subsystem": "scsi", 00:06:57.804 "config": null 00:06:57.804 }, 00:06:57.804 { 00:06:57.804 "subsystem": "scheduler", 00:06:57.804 "config": [ 00:06:57.804 { 00:06:57.804 "method": "framework_set_scheduler", 00:06:57.804 "params": { 00:06:57.804 "name": "static" 00:06:57.804 } 00:06:57.804 } 00:06:57.804 ] 00:06:57.804 }, 00:06:57.804 { 00:06:57.804 "subsystem": "vhost_scsi", 00:06:57.804 "config": [] 00:06:57.804 }, 00:06:57.804 { 00:06:57.804 "subsystem": "vhost_blk", 00:06:57.804 "config": [] 00:06:57.804 }, 00:06:57.804 { 00:06:57.804 "subsystem": "ublk", 00:06:57.804 "config": [] 00:06:57.804 }, 00:06:57.804 { 00:06:57.804 "subsystem": "nbd", 00:06:57.804 "config": [] 00:06:57.804 }, 00:06:57.804 { 00:06:57.804 "subsystem": "nvmf", 00:06:57.804 "config": [ 00:06:57.804 { 00:06:57.804 "method": "nvmf_set_config", 00:06:57.804 "params": { 00:06:57.804 "admin_cmd_passthru": { 00:06:57.804 "identify_ctrlr": false 00:06:57.804 }, 00:06:57.804 "dhchap_dhgroups": [ 00:06:57.804 "null", 00:06:57.804 "ffdhe2048", 00:06:57.804 "ffdhe3072", 00:06:57.804 "ffdhe4096", 00:06:57.804 "ffdhe6144", 00:06:57.804 "ffdhe8192" 00:06:57.804 ], 00:06:57.804 "dhchap_digests": [ 00:06:57.804 "sha256", 00:06:57.804 "sha384", 00:06:57.804 "sha512" 00:06:57.804 ], 00:06:57.804 "discovery_filter": "match_any" 00:06:57.804 } 00:06:57.804 }, 00:06:57.804 { 00:06:57.804 "method": "nvmf_set_max_subsystems", 00:06:57.804 "params": { 00:06:57.804 "max_subsystems": 1024 00:06:57.804 } 00:06:57.804 }, 00:06:57.804 { 00:06:57.804 "method": "nvmf_set_crdt", 00:06:57.804 "params": { 00:06:57.804 "crdt1": 0, 00:06:57.804 "crdt2": 0, 00:06:57.804 "crdt3": 0 00:06:57.804 } 00:06:57.804 }, 00:06:57.804 { 00:06:57.804 "method": "nvmf_create_transport", 00:06:57.804 "params": { 00:06:57.804 "abort_timeout_sec": 1, 00:06:57.804 "ack_timeout": 0, 00:06:57.804 "buf_cache_size": 4294967295, 00:06:57.804 "c2h_success": true, 00:06:57.804 "data_wr_pool_size": 0, 00:06:57.804 "dif_insert_or_strip": false, 00:06:57.804 "in_capsule_data_size": 4096, 00:06:57.804 "io_unit_size": 131072, 00:06:57.804 "max_aq_depth": 128, 00:06:57.804 "max_io_qpairs_per_ctrlr": 127, 00:06:57.804 "max_io_size": 131072, 00:06:57.804 "max_queue_depth": 128, 00:06:57.804 "num_shared_buffers": 511, 00:06:57.804 "sock_priority": 0, 00:06:57.804 "trtype": "TCP", 00:06:57.804 "zcopy": false 00:06:57.804 } 00:06:57.804 } 00:06:57.804 ] 00:06:57.804 }, 00:06:57.804 { 00:06:57.804 "subsystem": "iscsi", 00:06:57.804 "config": [ 00:06:57.804 { 00:06:57.804 "method": "iscsi_set_options", 00:06:57.804 "params": { 00:06:57.804 "allow_duplicated_isid": false, 
00:06:57.804 "chap_group": 0, 00:06:57.804 "data_out_pool_size": 2048, 00:06:57.804 "default_time2retain": 20, 00:06:57.804 "default_time2wait": 2, 00:06:57.804 "disable_chap": false, 00:06:57.804 "error_recovery_level": 0, 00:06:57.804 "first_burst_length": 8192, 00:06:57.804 "immediate_data": true, 00:06:57.804 "immediate_data_pool_size": 16384, 00:06:57.804 "max_connections_per_session": 2, 00:06:57.804 "max_large_datain_per_connection": 64, 00:06:57.804 "max_queue_depth": 64, 00:06:57.804 "max_r2t_per_connection": 4, 00:06:57.804 "max_sessions": 128, 00:06:57.804 "mutual_chap": false, 00:06:57.804 "node_base": "iqn.2016-06.io.spdk", 00:06:57.804 "nop_in_interval": 30, 00:06:57.804 "nop_timeout": 60, 00:06:57.804 "pdu_pool_size": 36864, 00:06:57.804 "require_chap": false 00:06:57.804 } 00:06:57.804 } 00:06:57.804 ] 00:06:57.804 } 00:06:57.805 ] 00:06:57.805 } 00:06:57.805 11:19:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:06:57.805 11:19:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 58978 00:06:57.805 11:19:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 58978 ']' 00:06:57.805 11:19:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 58978 00:06:57.805 11:19:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:06:57.805 11:19:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:57.805 11:19:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58978 00:06:57.805 11:19:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:57.805 11:19:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:57.805 killing process with pid 58978 00:06:57.805 11:19:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58978' 00:06:57.805 11:19:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 58978 00:06:57.805 11:19:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 58978 00:06:58.063 11:19:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=59004 00:06:58.063 11:19:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:58.063 11:19:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:07:03.331 11:19:48 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 59004 00:07:03.331 11:19:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 59004 ']' 00:07:03.331 11:19:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 59004 00:07:03.331 11:19:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:07:03.331 11:19:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:03.331 11:19:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59004 00:07:03.331 11:19:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:03.331 11:19:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:03.331 11:19:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 
59004' 00:07:03.331 killing process with pid 59004 00:07:03.331 11:19:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 59004 00:07:03.331 11:19:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 59004 00:07:03.331 11:19:48 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:07:03.331 11:19:48 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:07:03.331 00:07:03.331 real 0m6.272s 00:07:03.331 user 0m5.973s 00:07:03.331 sys 0m0.483s 00:07:03.331 11:19:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:03.331 11:19:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:07:03.331 ************************************ 00:07:03.331 END TEST skip_rpc_with_json 00:07:03.331 ************************************ 00:07:03.331 11:19:48 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:07:03.331 11:19:48 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:03.331 11:19:48 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:03.331 11:19:48 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:03.331 ************************************ 00:07:03.331 START TEST skip_rpc_with_delay 00:07:03.331 ************************************ 00:07:03.331 11:19:48 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:07:03.331 11:19:48 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:07:03.331 11:19:48 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:07:03.331 11:19:48 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:07:03.331 11:19:48 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:03.331 11:19:48 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:03.331 11:19:48 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:03.331 11:19:48 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:03.331 11:19:48 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:03.331 11:19:48 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:03.331 11:19:48 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:03.331 11:19:48 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:07:03.331 11:19:48 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:07:03.331 [2024-11-20 11:19:48.907277] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:07:03.590 11:19:48 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:07:03.590 11:19:48 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:03.590 11:19:48 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:03.590 11:19:48 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:03.590 00:07:03.590 real 0m0.104s 00:07:03.590 user 0m0.061s 00:07:03.590 sys 0m0.039s 00:07:03.590 11:19:48 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:03.590 11:19:48 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:07:03.590 ************************************ 00:07:03.590 END TEST skip_rpc_with_delay 00:07:03.590 ************************************ 00:07:03.590 11:19:48 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:07:03.590 11:19:48 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:07:03.590 11:19:48 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:07:03.590 11:19:48 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:03.590 11:19:48 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:03.590 11:19:48 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:03.590 ************************************ 00:07:03.590 START TEST exit_on_failed_rpc_init 00:07:03.590 ************************************ 00:07:03.590 11:19:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:07:03.590 11:19:48 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=59113 00:07:03.590 11:19:48 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:03.590 11:19:48 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 59113 00:07:03.590 11:19:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 59113 ']' 00:07:03.590 11:19:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:03.590 11:19:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:03.590 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:03.590 11:19:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:03.590 11:19:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:03.590 11:19:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:07:03.590 [2024-11-20 11:19:49.062768] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 
00:07:03.590 [2024-11-20 11:19:49.062893] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59113 ] 00:07:03.847 [2024-11-20 11:19:49.212505] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:03.847 [2024-11-20 11:19:49.270365] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.107 11:19:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:04.107 11:19:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:07:04.107 11:19:49 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:07:04.107 11:19:49 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:07:04.107 11:19:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:07:04.107 11:19:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:07:04.107 11:19:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:04.107 11:19:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:04.107 11:19:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:04.107 11:19:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:04.107 11:19:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:04.107 11:19:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:04.107 11:19:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:04.107 11:19:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:07:04.107 11:19:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:07:04.107 [2024-11-20 11:19:49.584246] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 00:07:04.107 [2024-11-20 11:19:49.584389] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59130 ] 00:07:04.366 [2024-11-20 11:19:49.735537] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:04.366 [2024-11-20 11:19:49.786105] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:04.366 [2024-11-20 11:19:49.786231] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:07:04.366 [2024-11-20 11:19:49.786254] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:07:04.366 [2024-11-20 11:19:49.786266] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:04.366 11:19:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:07:04.366 11:19:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:04.366 11:19:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:07:04.367 11:19:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:07:04.367 11:19:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:07:04.367 11:19:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:04.367 11:19:49 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:07:04.367 11:19:49 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 59113 00:07:04.367 11:19:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 59113 ']' 00:07:04.367 11:19:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 59113 00:07:04.367 11:19:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:07:04.367 11:19:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:04.367 11:19:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59113 00:07:04.367 11:19:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:04.367 11:19:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:04.367 killing process with pid 59113 00:07:04.367 11:19:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59113' 00:07:04.367 11:19:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 59113 00:07:04.367 11:19:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 59113 00:07:04.625 00:07:04.625 real 0m1.192s 00:07:04.625 user 0m1.524s 00:07:04.625 sys 0m0.327s 00:07:04.625 11:19:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:04.625 11:19:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:07:04.625 ************************************ 00:07:04.625 END TEST exit_on_failed_rpc_init 00:07:04.625 ************************************ 00:07:04.884 11:19:50 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:07:04.884 00:07:04.884 real 0m13.296s 00:07:04.884 user 0m12.758s 00:07:04.884 sys 0m1.241s 00:07:04.884 11:19:50 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:04.884 11:19:50 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:04.884 ************************************ 00:07:04.884 END TEST skip_rpc 00:07:04.884 ************************************ 00:07:04.884 11:19:50 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:07:04.884 11:19:50 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:04.884 11:19:50 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:04.884 11:19:50 -- common/autotest_common.sh@10 -- # set +x 00:07:04.884 
************************************ 00:07:04.884 START TEST rpc_client 00:07:04.884 ************************************ 00:07:04.884 11:19:50 rpc_client -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:07:04.884 * Looking for test storage... 00:07:04.884 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:07:04.884 11:19:50 rpc_client -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:04.884 11:19:50 rpc_client -- common/autotest_common.sh@1693 -- # lcov --version 00:07:04.884 11:19:50 rpc_client -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:04.884 11:19:50 rpc_client -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:04.884 11:19:50 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:04.884 11:19:50 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:04.884 11:19:50 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:04.884 11:19:50 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:07:04.884 11:19:50 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:07:04.884 11:19:50 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:07:04.884 11:19:50 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:07:04.884 11:19:50 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:07:04.884 11:19:50 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:07:04.884 11:19:50 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:07:04.884 11:19:50 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:04.884 11:19:50 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:07:04.884 11:19:50 rpc_client -- scripts/common.sh@345 -- # : 1 00:07:04.884 11:19:50 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:04.884 11:19:50 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:04.884 11:19:50 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:07:04.884 11:19:50 rpc_client -- scripts/common.sh@353 -- # local d=1 00:07:04.884 11:19:50 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:04.884 11:19:50 rpc_client -- scripts/common.sh@355 -- # echo 1 00:07:04.884 11:19:50 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:07:04.884 11:19:50 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:07:04.884 11:19:50 rpc_client -- scripts/common.sh@353 -- # local d=2 00:07:04.884 11:19:50 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:04.884 11:19:50 rpc_client -- scripts/common.sh@355 -- # echo 2 00:07:04.884 11:19:50 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:07:04.884 11:19:50 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:04.884 11:19:50 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:04.884 11:19:50 rpc_client -- scripts/common.sh@368 -- # return 0 00:07:04.884 11:19:50 rpc_client -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:04.884 11:19:50 rpc_client -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:04.884 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:04.884 --rc genhtml_branch_coverage=1 00:07:04.884 --rc genhtml_function_coverage=1 00:07:04.884 --rc genhtml_legend=1 00:07:04.884 --rc geninfo_all_blocks=1 00:07:04.884 --rc geninfo_unexecuted_blocks=1 00:07:04.884 00:07:04.884 ' 00:07:04.884 11:19:50 rpc_client -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:04.884 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:04.884 --rc genhtml_branch_coverage=1 00:07:04.884 --rc genhtml_function_coverage=1 00:07:04.884 --rc genhtml_legend=1 00:07:04.884 --rc geninfo_all_blocks=1 00:07:04.884 --rc geninfo_unexecuted_blocks=1 00:07:04.884 00:07:04.884 ' 00:07:04.884 11:19:50 rpc_client -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:04.884 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:04.884 --rc genhtml_branch_coverage=1 00:07:04.884 --rc genhtml_function_coverage=1 00:07:04.884 --rc genhtml_legend=1 00:07:04.884 --rc geninfo_all_blocks=1 00:07:04.884 --rc geninfo_unexecuted_blocks=1 00:07:04.884 00:07:04.884 ' 00:07:04.884 11:19:50 rpc_client -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:04.884 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:04.884 --rc genhtml_branch_coverage=1 00:07:04.884 --rc genhtml_function_coverage=1 00:07:04.884 --rc genhtml_legend=1 00:07:04.884 --rc geninfo_all_blocks=1 00:07:04.884 --rc geninfo_unexecuted_blocks=1 00:07:04.884 00:07:04.884 ' 00:07:04.884 11:19:50 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:07:05.143 OK 00:07:05.143 11:19:50 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:07:05.143 00:07:05.143 real 0m0.217s 00:07:05.143 user 0m0.140s 00:07:05.143 sys 0m0.084s 00:07:05.143 11:19:50 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:05.143 11:19:50 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:07:05.143 ************************************ 00:07:05.143 END TEST rpc_client 00:07:05.143 ************************************ 00:07:05.143 11:19:50 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:07:05.143 11:19:50 -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:05.143 11:19:50 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:05.143 11:19:50 -- common/autotest_common.sh@10 -- # set +x 00:07:05.143 ************************************ 00:07:05.143 START TEST json_config 00:07:05.143 ************************************ 00:07:05.143 11:19:50 json_config -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:07:05.143 11:19:50 json_config -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:05.143 11:19:50 json_config -- common/autotest_common.sh@1693 -- # lcov --version 00:07:05.143 11:19:50 json_config -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:05.402 11:19:50 json_config -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:05.402 11:19:50 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:05.402 11:19:50 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:05.402 11:19:50 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:05.402 11:19:50 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:07:05.402 11:19:50 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:07:05.402 11:19:50 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:07:05.402 11:19:50 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:07:05.402 11:19:50 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:07:05.402 11:19:50 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:07:05.402 11:19:50 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:07:05.402 11:19:50 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:05.402 11:19:50 json_config -- scripts/common.sh@344 -- # case "$op" in 00:07:05.402 11:19:50 json_config -- scripts/common.sh@345 -- # : 1 00:07:05.402 11:19:50 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:05.402 11:19:50 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:05.402 11:19:50 json_config -- scripts/common.sh@365 -- # decimal 1 00:07:05.402 11:19:50 json_config -- scripts/common.sh@353 -- # local d=1 00:07:05.402 11:19:50 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:05.402 11:19:50 json_config -- scripts/common.sh@355 -- # echo 1 00:07:05.402 11:19:50 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:07:05.402 11:19:50 json_config -- scripts/common.sh@366 -- # decimal 2 00:07:05.402 11:19:50 json_config -- scripts/common.sh@353 -- # local d=2 00:07:05.402 11:19:50 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:05.402 11:19:50 json_config -- scripts/common.sh@355 -- # echo 2 00:07:05.402 11:19:50 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:07:05.402 11:19:50 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:05.402 11:19:50 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:05.402 11:19:50 json_config -- scripts/common.sh@368 -- # return 0 00:07:05.402 11:19:50 json_config -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:05.402 11:19:50 json_config -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:05.402 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:05.402 --rc genhtml_branch_coverage=1 00:07:05.402 --rc genhtml_function_coverage=1 00:07:05.402 --rc genhtml_legend=1 00:07:05.402 --rc geninfo_all_blocks=1 00:07:05.402 --rc geninfo_unexecuted_blocks=1 00:07:05.402 00:07:05.402 ' 00:07:05.402 11:19:50 json_config -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:05.402 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:05.402 --rc genhtml_branch_coverage=1 00:07:05.402 --rc genhtml_function_coverage=1 00:07:05.402 --rc genhtml_legend=1 00:07:05.402 --rc geninfo_all_blocks=1 00:07:05.402 --rc geninfo_unexecuted_blocks=1 00:07:05.402 00:07:05.402 ' 00:07:05.402 11:19:50 json_config -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:05.402 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:05.402 --rc genhtml_branch_coverage=1 00:07:05.402 --rc genhtml_function_coverage=1 00:07:05.402 --rc genhtml_legend=1 00:07:05.402 --rc geninfo_all_blocks=1 00:07:05.402 --rc geninfo_unexecuted_blocks=1 00:07:05.402 00:07:05.402 ' 00:07:05.402 11:19:50 json_config -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:05.402 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:05.402 --rc genhtml_branch_coverage=1 00:07:05.402 --rc genhtml_function_coverage=1 00:07:05.402 --rc genhtml_legend=1 00:07:05.402 --rc geninfo_all_blocks=1 00:07:05.402 --rc geninfo_unexecuted_blocks=1 00:07:05.402 00:07:05.402 ' 00:07:05.402 11:19:50 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:05.402 11:19:50 json_config -- nvmf/common.sh@7 -- # uname -s 00:07:05.402 11:19:50 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:05.402 11:19:50 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:05.402 11:19:50 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:05.402 11:19:50 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:05.402 11:19:50 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:05.402 11:19:50 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:05.403 11:19:50 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:05.403 11:19:50 
json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:05.403 11:19:50 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:05.403 11:19:50 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:05.403 11:19:50 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a 00:07:05.403 11:19:50 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=806d442c-08ba-4719-b76d-33169283459a 00:07:05.403 11:19:50 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:05.403 11:19:50 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:05.403 11:19:50 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:07:05.403 11:19:50 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:05.403 11:19:50 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:05.403 11:19:50 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:07:05.403 11:19:50 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:05.403 11:19:50 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:05.403 11:19:50 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:05.403 11:19:50 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:05.403 11:19:50 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:05.403 11:19:50 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:05.403 11:19:50 json_config -- paths/export.sh@5 -- # export PATH 00:07:05.403 11:19:50 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:05.403 11:19:50 json_config -- nvmf/common.sh@51 -- # : 0 00:07:05.403 11:19:50 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:05.403 11:19:50 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:05.403 11:19:50 json_config -- 
nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:05.403 11:19:50 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:05.403 11:19:50 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:05.403 11:19:50 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:05.403 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:05.403 11:19:50 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:05.403 11:19:50 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:05.403 11:19:50 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:05.403 11:19:50 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:07:05.403 11:19:50 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:07:05.403 11:19:50 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:07:05.403 11:19:50 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:07:05.403 11:19:50 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:07:05.403 11:19:50 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:07:05.403 11:19:50 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:07:05.403 11:19:50 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:07:05.403 11:19:50 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:07:05.403 11:19:50 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:07:05.403 11:19:50 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:07:05.403 11:19:50 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:07:05.403 11:19:50 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:07:05.403 11:19:50 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:07:05.403 11:19:50 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:07:05.403 INFO: JSON configuration test init 00:07:05.403 11:19:50 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:07:05.403 11:19:50 json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:07:05.403 11:19:50 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:07:05.403 11:19:50 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:05.403 11:19:50 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:05.403 11:19:50 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:07:05.403 11:19:50 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:05.403 11:19:50 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:05.403 11:19:50 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:07:05.403 11:19:50 json_config -- json_config/common.sh@9 -- # local app=target 00:07:05.403 11:19:50 json_config -- json_config/common.sh@10 -- # shift 
00:07:05.403 11:19:50 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:07:05.403 11:19:50 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:07:05.403 11:19:50 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:07:05.403 11:19:50 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:07:05.403 11:19:50 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:07:05.403 11:19:50 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=59264 00:07:05.403 Waiting for target to run... 00:07:05.403 11:19:50 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:07:05.403 11:19:50 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:07:05.403 11:19:50 json_config -- json_config/common.sh@25 -- # waitforlisten 59264 /var/tmp/spdk_tgt.sock 00:07:05.403 11:19:50 json_config -- common/autotest_common.sh@835 -- # '[' -z 59264 ']' 00:07:05.403 11:19:50 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:07:05.403 11:19:50 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:05.403 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:07:05.403 11:19:50 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:07:05.403 11:19:50 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:05.403 11:19:50 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:05.403 [2024-11-20 11:19:50.872107] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 00:07:05.403 [2024-11-20 11:19:50.872234] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59264 ] 00:07:05.663 [2024-11-20 11:19:51.206322] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:05.663 [2024-11-20 11:19:51.244526] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.600 11:19:52 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:06.600 00:07:06.600 11:19:52 json_config -- common/autotest_common.sh@868 -- # return 0 00:07:06.600 11:19:52 json_config -- json_config/common.sh@26 -- # echo '' 00:07:06.600 11:19:52 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:07:06.600 11:19:52 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:07:06.600 11:19:52 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:06.600 11:19:52 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:06.600 11:19:52 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:07:06.600 11:19:52 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:07:06.600 11:19:52 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:06.600 11:19:52 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:06.600 11:19:52 json_config -- json_config/json_config.sh@280 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:07:06.600 11:19:52 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:07:06.600 11:19:52 json_config 
-- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:07:07.165 11:19:52 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types 00:07:07.165 11:19:52 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:07:07.165 11:19:52 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:07.165 11:19:52 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:07.165 11:19:52 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:07:07.165 11:19:52 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:07:07.165 11:19:52 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:07:07.165 11:19:52 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:07:07.165 11:19:52 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:07:07.165 11:19:52 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:07:07.165 11:19:52 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:07:07.165 11:19:52 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:07:07.731 11:19:53 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:07:07.731 11:19:53 json_config -- json_config/json_config.sh@51 -- # local get_types 00:07:07.731 11:19:53 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:07:07.731 11:19:53 json_config -- json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:07:07.731 11:19:53 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:07:07.731 11:19:53 json_config -- json_config/json_config.sh@54 -- # sort 00:07:07.731 11:19:53 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:07:07.731 11:19:53 json_config -- json_config/json_config.sh@54 -- # type_diff= 00:07:07.731 11:19:53 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:07:07.731 11:19:53 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:07:07.731 11:19:53 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:07.731 11:19:53 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:07.731 11:19:53 json_config -- json_config/json_config.sh@62 -- # return 0 00:07:07.731 11:19:53 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:07:07.731 11:19:53 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:07:07.731 11:19:53 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:07:07.731 11:19:53 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:07:07.732 11:19:53 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:07:07.732 11:19:53 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:07:07.732 11:19:53 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:07.732 11:19:53 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:07.732 11:19:53 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:07:07.732 11:19:53 json_config -- json_config/json_config.sh@240 -- # [[ tcp == 
\r\d\m\a ]] 00:07:07.732 11:19:53 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:07:07.732 11:19:53 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:07:07.732 11:19:53 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:07:07.990 MallocForNvmf0 00:07:07.990 11:19:53 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:07:07.990 11:19:53 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:07:08.248 MallocForNvmf1 00:07:08.248 11:19:53 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:07:08.248 11:19:53 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:07:08.506 [2024-11-20 11:19:54.019840] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:08.506 11:19:54 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:08.506 11:19:54 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:09.073 11:19:54 json_config -- json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:07:09.073 11:19:54 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:07:09.331 11:19:54 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:07:09.331 11:19:54 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:07:09.589 11:19:55 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:07:09.589 11:19:55 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:07:09.847 [2024-11-20 11:19:55.404755] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:07:09.847 11:19:55 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:07:09.847 11:19:55 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:09.847 11:19:55 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:10.105 11:19:55 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:07:10.105 11:19:55 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:10.105 11:19:55 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:10.105 11:19:55 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:07:10.105 11:19:55 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name 
MallocBdevForConfigChangeCheck 00:07:10.105 11:19:55 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:07:10.363 MallocBdevForConfigChangeCheck 00:07:10.363 11:19:55 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:07:10.363 11:19:55 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:10.363 11:19:55 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:10.363 11:19:55 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:07:10.363 11:19:55 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:07:10.954 INFO: shutting down applications... 00:07:10.954 11:19:56 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...' 00:07:10.954 11:19:56 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:07:10.954 11:19:56 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:07:10.954 11:19:56 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:07:10.954 11:19:56 json_config -- json_config/json_config.sh@340 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:07:11.213 Calling clear_iscsi_subsystem 00:07:11.213 Calling clear_nvmf_subsystem 00:07:11.213 Calling clear_nbd_subsystem 00:07:11.213 Calling clear_ublk_subsystem 00:07:11.213 Calling clear_vhost_blk_subsystem 00:07:11.213 Calling clear_vhost_scsi_subsystem 00:07:11.213 Calling clear_bdev_subsystem 00:07:11.213 11:19:56 json_config -- json_config/json_config.sh@344 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:07:11.213 11:19:56 json_config -- json_config/json_config.sh@350 -- # count=100 00:07:11.213 11:19:56 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:07:11.213 11:19:56 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:07:11.213 11:19:56 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:07:11.213 11:19:56 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:07:11.780 11:19:57 json_config -- json_config/json_config.sh@352 -- # break 00:07:11.780 11:19:57 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:07:11.780 11:19:57 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target 00:07:11.780 11:19:57 json_config -- json_config/common.sh@31 -- # local app=target 00:07:11.780 11:19:57 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:07:11.780 11:19:57 json_config -- json_config/common.sh@35 -- # [[ -n 59264 ]] 00:07:11.780 11:19:57 json_config -- json_config/common.sh@38 -- # kill -SIGINT 59264 00:07:11.780 11:19:57 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:07:11.780 11:19:57 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:11.780 11:19:57 json_config -- json_config/common.sh@41 -- # kill -0 59264 00:07:11.780 11:19:57 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:07:12.347 11:19:57 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:07:12.347 11:19:57 json_config -- 
json_config/common.sh@40 -- # (( i < 30 )) 00:07:12.347 11:19:57 json_config -- json_config/common.sh@41 -- # kill -0 59264 00:07:12.347 11:19:57 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:07:12.347 11:19:57 json_config -- json_config/common.sh@43 -- # break 00:07:12.347 11:19:57 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:07:12.347 SPDK target shutdown done 00:07:12.347 11:19:57 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:07:12.347 INFO: relaunching applications... 00:07:12.347 11:19:57 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:07:12.347 11:19:57 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:07:12.347 11:19:57 json_config -- json_config/common.sh@9 -- # local app=target 00:07:12.347 11:19:57 json_config -- json_config/common.sh@10 -- # shift 00:07:12.348 11:19:57 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:07:12.348 11:19:57 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:07:12.348 11:19:57 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:07:12.348 11:19:57 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:07:12.348 11:19:57 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:07:12.348 11:19:57 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=59560 00:07:12.348 11:19:57 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:07:12.348 Waiting for target to run... 00:07:12.348 11:19:57 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:07:12.348 11:19:57 json_config -- json_config/common.sh@25 -- # waitforlisten 59560 /var/tmp/spdk_tgt.sock 00:07:12.348 11:19:57 json_config -- common/autotest_common.sh@835 -- # '[' -z 59560 ']' 00:07:12.348 11:19:57 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:07:12.348 11:19:57 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:12.348 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:07:12.348 11:19:57 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:07:12.348 11:19:57 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:12.348 11:19:57 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:12.348 [2024-11-20 11:19:57.720970] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 
00:07:12.348 [2024-11-20 11:19:57.721075] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59560 ] 00:07:12.607 [2024-11-20 11:19:58.023330] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:12.607 [2024-11-20 11:19:58.049588] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.866 [2024-11-20 11:19:58.375282] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:12.866 [2024-11-20 11:19:58.407392] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:07:13.432 11:19:58 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:13.432 11:19:58 json_config -- common/autotest_common.sh@868 -- # return 0 00:07:13.432 00:07:13.432 11:19:58 json_config -- json_config/common.sh@26 -- # echo '' 00:07:13.433 11:19:58 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:07:13.433 INFO: Checking if target configuration is the same... 00:07:13.433 11:19:58 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:07:13.433 11:19:58 json_config -- json_config/json_config.sh@385 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:07:13.433 11:19:58 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:07:13.433 11:19:58 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:07:13.433 + '[' 2 -ne 2 ']' 00:07:13.433 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:07:13.433 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:07:13.433 + rootdir=/home/vagrant/spdk_repo/spdk 00:07:13.433 +++ basename /dev/fd/62 00:07:13.433 ++ mktemp /tmp/62.XXX 00:07:13.433 + tmp_file_1=/tmp/62.SzU 00:07:13.433 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:07:13.433 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:07:13.433 + tmp_file_2=/tmp/spdk_tgt_config.json.ck4 00:07:13.433 + ret=0 00:07:13.433 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:07:13.691 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:07:13.691 + diff -u /tmp/62.SzU /tmp/spdk_tgt_config.json.ck4 00:07:13.691 INFO: JSON config files are the same 00:07:13.691 + echo 'INFO: JSON config files are the same' 00:07:13.691 + rm /tmp/62.SzU /tmp/spdk_tgt_config.json.ck4 00:07:13.691 + exit 0 00:07:13.691 11:19:59 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:07:13.691 11:19:59 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:07:13.691 INFO: changing configuration and checking if this can be detected... 
00:07:13.691 11:19:59 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:07:13.691 11:19:59 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:07:14.261 11:19:59 json_config -- json_config/json_config.sh@394 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:07:14.261 11:19:59 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:07:14.261 11:19:59 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:07:14.261 + '[' 2 -ne 2 ']' 00:07:14.261 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:07:14.261 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:07:14.261 + rootdir=/home/vagrant/spdk_repo/spdk 00:07:14.261 +++ basename /dev/fd/62 00:07:14.261 ++ mktemp /tmp/62.XXX 00:07:14.261 + tmp_file_1=/tmp/62.5v6 00:07:14.261 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:07:14.261 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:07:14.261 + tmp_file_2=/tmp/spdk_tgt_config.json.o1k 00:07:14.261 + ret=0 00:07:14.261 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:07:14.829 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:07:14.829 + diff -u /tmp/62.5v6 /tmp/spdk_tgt_config.json.o1k 00:07:14.829 + ret=1 00:07:14.829 + echo '=== Start of file: /tmp/62.5v6 ===' 00:07:14.829 + cat /tmp/62.5v6 00:07:14.829 + echo '=== End of file: /tmp/62.5v6 ===' 00:07:14.829 + echo '' 00:07:14.829 + echo '=== Start of file: /tmp/spdk_tgt_config.json.o1k ===' 00:07:14.829 + cat /tmp/spdk_tgt_config.json.o1k 00:07:14.829 + echo '=== End of file: /tmp/spdk_tgt_config.json.o1k ===' 00:07:14.829 + echo '' 00:07:14.829 + rm /tmp/62.5v6 /tmp/spdk_tgt_config.json.o1k 00:07:14.829 + exit 1 00:07:14.829 INFO: configuration change detected. 00:07:14.829 11:20:00 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 
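The json_config trace above follows a simple drift check: the running target's configuration is dumped through the save_config RPC, both the dump and the reference file are normalized with config_filter.py -method sort, and diff's exit status decides between 'JSON config files are the same' and 'configuration change detected'. The standalone sketch below restates that flow; the wrapper name compare_tgt_config and the stdin/stdout redirections are assumptions (redirections are not echoed in the xtrace output), while the tool paths and messages are taken from the log. The bdev_malloc_delete MallocBdevForConfigChangeCheck call at the start of this block is what guarantees the second run of the comparison reports a change.

  # Sketch of the compare step only, not the test suite's own implementation.
  compare_tgt_config() {
      local reference=$1
      local live sorted_live sorted_ref
      live=$(mktemp /tmp/live.XXX)
      sorted_live=$(mktemp /tmp/live.sorted.XXX)
      sorted_ref=$(mktemp /tmp/ref.sorted.XXX)
      # Dump the running target's configuration over its RPC socket.
      /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config > "$live"
      # Normalize ordering on both sides before comparing (assumed stdin/stdout usage).
      /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort < "$live" > "$sorted_live"
      /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort < "$reference" > "$sorted_ref"
      if diff -u "$sorted_live" "$sorted_ref"; then
          echo 'INFO: JSON config files are the same'
      else
          echo 'INFO: configuration change detected.'
      fi
      rm -f "$live" "$sorted_live" "$sorted_ref"
  }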
00:07:14.829 11:20:00 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:07:14.829 11:20:00 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:07:14.829 11:20:00 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:14.829 11:20:00 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:14.829 11:20:00 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:07:14.829 11:20:00 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:07:14.829 11:20:00 json_config -- json_config/json_config.sh@324 -- # [[ -n 59560 ]] 00:07:14.829 11:20:00 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:07:14.829 11:20:00 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:07:14.829 11:20:00 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:14.829 11:20:00 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:14.829 11:20:00 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:07:14.829 11:20:00 json_config -- json_config/json_config.sh@200 -- # uname -s 00:07:14.829 11:20:00 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:07:14.829 11:20:00 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:07:14.829 11:20:00 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:07:14.829 11:20:00 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:07:14.829 11:20:00 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:14.829 11:20:00 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:14.829 11:20:00 json_config -- json_config/json_config.sh@330 -- # killprocess 59560 00:07:14.829 11:20:00 json_config -- common/autotest_common.sh@954 -- # '[' -z 59560 ']' 00:07:14.829 11:20:00 json_config -- common/autotest_common.sh@958 -- # kill -0 59560 00:07:14.829 11:20:00 json_config -- common/autotest_common.sh@959 -- # uname 00:07:14.829 11:20:00 json_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:14.829 11:20:00 json_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59560 00:07:14.829 11:20:00 json_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:14.829 killing process with pid 59560 00:07:14.829 11:20:00 json_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:14.829 11:20:00 json_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59560' 00:07:14.829 11:20:00 json_config -- common/autotest_common.sh@973 -- # kill 59560 00:07:14.829 11:20:00 json_config -- common/autotest_common.sh@978 -- # wait 59560 00:07:15.091 11:20:00 json_config -- json_config/json_config.sh@333 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:07:15.091 11:20:00 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:07:15.091 11:20:00 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:15.091 11:20:00 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:15.091 11:20:00 json_config -- json_config/json_config.sh@335 -- # return 0 00:07:15.091 INFO: Success 00:07:15.091 11:20:00 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:07:15.091 ************************************ 00:07:15.091 END TEST json_config 00:07:15.091 
************************************ 00:07:15.091 00:07:15.091 real 0m10.036s 00:07:15.091 user 0m15.206s 00:07:15.091 sys 0m1.663s 00:07:15.091 11:20:00 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:15.091 11:20:00 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:15.091 11:20:00 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:07:15.091 11:20:00 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:15.091 11:20:00 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:15.091 11:20:00 -- common/autotest_common.sh@10 -- # set +x 00:07:15.091 ************************************ 00:07:15.091 START TEST json_config_extra_key 00:07:15.091 ************************************ 00:07:15.091 11:20:00 json_config_extra_key -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:07:15.349 11:20:00 json_config_extra_key -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:15.349 11:20:00 json_config_extra_key -- common/autotest_common.sh@1693 -- # lcov --version 00:07:15.349 11:20:00 json_config_extra_key -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:15.349 11:20:00 json_config_extra_key -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:15.349 11:20:00 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:15.349 11:20:00 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:15.349 11:20:00 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:15.349 11:20:00 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:07:15.349 11:20:00 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:07:15.349 11:20:00 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:07:15.349 11:20:00 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:07:15.349 11:20:00 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:07:15.349 11:20:00 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:07:15.349 11:20:00 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:07:15.349 11:20:00 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:15.349 11:20:00 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:07:15.349 11:20:00 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:07:15.349 11:20:00 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:15.349 11:20:00 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:15.349 11:20:00 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:07:15.349 11:20:00 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:07:15.349 11:20:00 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:15.349 11:20:00 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:07:15.349 11:20:00 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:07:15.349 11:20:00 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:07:15.349 11:20:00 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:07:15.349 11:20:00 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:15.349 11:20:00 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:07:15.349 11:20:00 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:07:15.349 11:20:00 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:15.349 11:20:00 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:15.349 11:20:00 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:07:15.349 11:20:00 json_config_extra_key -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:15.349 11:20:00 json_config_extra_key -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:15.349 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:15.349 --rc genhtml_branch_coverage=1 00:07:15.349 --rc genhtml_function_coverage=1 00:07:15.349 --rc genhtml_legend=1 00:07:15.349 --rc geninfo_all_blocks=1 00:07:15.349 --rc geninfo_unexecuted_blocks=1 00:07:15.349 00:07:15.349 ' 00:07:15.349 11:20:00 json_config_extra_key -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:15.349 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:15.349 --rc genhtml_branch_coverage=1 00:07:15.349 --rc genhtml_function_coverage=1 00:07:15.349 --rc genhtml_legend=1 00:07:15.349 --rc geninfo_all_blocks=1 00:07:15.349 --rc geninfo_unexecuted_blocks=1 00:07:15.349 00:07:15.349 ' 00:07:15.349 11:20:00 json_config_extra_key -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:15.349 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:15.349 --rc genhtml_branch_coverage=1 00:07:15.349 --rc genhtml_function_coverage=1 00:07:15.349 --rc genhtml_legend=1 00:07:15.350 --rc geninfo_all_blocks=1 00:07:15.350 --rc geninfo_unexecuted_blocks=1 00:07:15.350 00:07:15.350 ' 00:07:15.350 11:20:00 json_config_extra_key -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:15.350 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:15.350 --rc genhtml_branch_coverage=1 00:07:15.350 --rc genhtml_function_coverage=1 00:07:15.350 --rc genhtml_legend=1 00:07:15.350 --rc geninfo_all_blocks=1 00:07:15.350 --rc geninfo_unexecuted_blocks=1 00:07:15.350 00:07:15.350 ' 00:07:15.350 11:20:00 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:15.350 11:20:00 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:07:15.350 11:20:00 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:15.350 11:20:00 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:15.350 11:20:00 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:15.350 11:20:00 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:15.350 11:20:00 
json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:15.350 11:20:00 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:15.350 11:20:00 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:15.350 11:20:00 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:15.350 11:20:00 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:15.350 11:20:00 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:15.350 11:20:00 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a 00:07:15.350 11:20:00 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=806d442c-08ba-4719-b76d-33169283459a 00:07:15.350 11:20:00 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:15.350 11:20:00 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:15.350 11:20:00 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:07:15.350 11:20:00 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:15.350 11:20:00 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:15.350 11:20:00 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:07:15.350 11:20:00 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:15.350 11:20:00 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:15.350 11:20:00 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:15.350 11:20:00 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:15.350 11:20:00 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:15.350 11:20:00 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:15.350 11:20:00 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:07:15.350 11:20:00 json_config_extra_key -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:15.350 11:20:00 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:07:15.350 11:20:00 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:15.350 11:20:00 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:15.350 11:20:00 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:15.350 11:20:00 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:15.350 11:20:00 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:15.350 11:20:00 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:15.350 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:15.350 11:20:00 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:15.350 11:20:00 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:15.350 11:20:00 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:15.350 11:20:00 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:07:15.350 11:20:00 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:07:15.350 11:20:00 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:07:15.350 11:20:00 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:07:15.350 11:20:00 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:07:15.350 11:20:00 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:07:15.350 11:20:00 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:07:15.350 11:20:00 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:07:15.350 11:20:00 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:07:15.350 11:20:00 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:07:15.350 INFO: launching applications... 00:07:15.350 11:20:00 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 
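The declare -A bookkeeping above (app_pid, app_socket, app_params, configs_path) is how common.sh tracks each application it launches; the spdk_tgt invocation traced next simply combines those entries on one command line and then waits for the RPC socket. A condensed sketch, assuming a plain polling loop in place of waitforlisten, whose body is not shown in this excerpt:

  declare -A app_pid=([target]='')
  declare -A app_socket=([target]='/var/tmp/spdk_tgt.sock')
  declare -A app_params=([target]='-m 0x1 -s 1024')
  declare -A configs_path=([target]='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json')

  start_target() {
      local app=target
      # Launch spdk_tgt on its own RPC socket with the extra_key JSON config.
      /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ${app_params[$app]} \
          -r "${app_socket[$app]}" --json "${configs_path[$app]}" &
      app_pid[$app]=$!
      # Assumed stand-in for waitforlisten: poll until the UNIX socket appears.
      for ((i = 0; i < 100; i++)); do
          [[ -S ${app_socket[$app]} ]] && return 0
          sleep 0.1
      done
      return 1
  }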
00:07:15.350 11:20:00 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:07:15.350 11:20:00 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:07:15.350 11:20:00 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:07:15.350 11:20:00 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:07:15.350 11:20:00 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:07:15.350 11:20:00 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:07:15.350 11:20:00 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:07:15.350 11:20:00 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:07:15.350 11:20:00 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=59744 00:07:15.350 11:20:00 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:07:15.350 Waiting for target to run... 00:07:15.350 11:20:00 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:07:15.350 11:20:00 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 59744 /var/tmp/spdk_tgt.sock 00:07:15.350 11:20:00 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 59744 ']' 00:07:15.350 11:20:00 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:07:15.350 11:20:00 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:15.350 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:07:15.350 11:20:00 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:07:15.350 11:20:00 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:15.350 11:20:00 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:07:15.350 [2024-11-20 11:20:00.934244] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 00:07:15.350 [2024-11-20 11:20:00.934390] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59744 ] 00:07:15.916 [2024-11-20 11:20:01.252168] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:15.916 [2024-11-20 11:20:01.279255] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:16.482 11:20:02 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:16.482 11:20:02 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:07:16.482 00:07:16.482 11:20:02 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:07:16.482 INFO: shutting down applications... 00:07:16.482 11:20:02 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
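Shutdown, traced immediately below for pid 59744 (and earlier in this excerpt for pid 59264), is the mirror image: send SIGINT, then poll kill -0 up to 30 times at half-second intervals until the process is gone. A minimal sketch of that loop; the function wrapper and the failure branch are illustrative, while the timing and the 'SPDK target shutdown done' message come straight from the common.sh trace.

  shutdown_target() {
      local pid=$1
      kill -SIGINT "$pid"
      for ((i = 0; i < 30; i++)); do
          # kill -0 sends no signal; it only tests whether the process still exists.
          if ! kill -0 "$pid" 2>/dev/null; then
              echo 'SPDK target shutdown done'
              return 0
          fi
          sleep 0.5
      done
      echo "ERROR: pid $pid did not exit after SIGINT" >&2
      return 1
  }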
00:07:16.482 11:20:02 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:07:16.482 11:20:02 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:07:16.482 11:20:02 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:07:16.482 11:20:02 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 59744 ]] 00:07:16.482 11:20:02 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 59744 00:07:16.482 11:20:02 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:07:16.482 11:20:02 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:16.482 11:20:02 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 59744 00:07:16.482 11:20:02 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:07:17.049 11:20:02 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:07:17.049 11:20:02 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:17.049 11:20:02 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 59744 00:07:17.049 11:20:02 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:07:17.049 11:20:02 json_config_extra_key -- json_config/common.sh@43 -- # break 00:07:17.049 11:20:02 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:07:17.049 11:20:02 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:07:17.049 SPDK target shutdown done 00:07:17.049 Success 00:07:17.049 11:20:02 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:07:17.049 00:07:17.049 real 0m1.862s 00:07:17.049 user 0m1.818s 00:07:17.049 sys 0m0.340s 00:07:17.049 11:20:02 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:17.049 ************************************ 00:07:17.049 END TEST json_config_extra_key 00:07:17.049 11:20:02 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:07:17.049 ************************************ 00:07:17.049 11:20:02 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:07:17.049 11:20:02 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:17.049 11:20:02 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:17.049 11:20:02 -- common/autotest_common.sh@10 -- # set +x 00:07:17.049 ************************************ 00:07:17.049 START TEST alias_rpc 00:07:17.049 ************************************ 00:07:17.049 11:20:02 alias_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:07:17.049 * Looking for test storage... 
00:07:17.049 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:07:17.049 11:20:02 alias_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:17.049 11:20:02 alias_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:07:17.049 11:20:02 alias_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:17.308 11:20:02 alias_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:17.308 11:20:02 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:17.308 11:20:02 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:17.308 11:20:02 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:17.308 11:20:02 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:07:17.308 11:20:02 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:07:17.308 11:20:02 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:07:17.308 11:20:02 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:07:17.308 11:20:02 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:07:17.308 11:20:02 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:07:17.308 11:20:02 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:07:17.308 11:20:02 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:17.308 11:20:02 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:07:17.308 11:20:02 alias_rpc -- scripts/common.sh@345 -- # : 1 00:07:17.308 11:20:02 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:17.308 11:20:02 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:17.308 11:20:02 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:07:17.308 11:20:02 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:07:17.308 11:20:02 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:17.308 11:20:02 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:07:17.308 11:20:02 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:07:17.308 11:20:02 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:07:17.308 11:20:02 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:07:17.308 11:20:02 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:17.308 11:20:02 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:07:17.308 11:20:02 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:07:17.308 11:20:02 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:17.308 11:20:02 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:17.308 11:20:02 alias_rpc -- scripts/common.sh@368 -- # return 0 00:07:17.308 11:20:02 alias_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:17.308 11:20:02 alias_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:17.308 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:17.308 --rc genhtml_branch_coverage=1 00:07:17.308 --rc genhtml_function_coverage=1 00:07:17.308 --rc genhtml_legend=1 00:07:17.308 --rc geninfo_all_blocks=1 00:07:17.308 --rc geninfo_unexecuted_blocks=1 00:07:17.308 00:07:17.308 ' 00:07:17.308 11:20:02 alias_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:17.308 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:17.308 --rc genhtml_branch_coverage=1 00:07:17.308 --rc genhtml_function_coverage=1 00:07:17.308 --rc genhtml_legend=1 00:07:17.308 --rc geninfo_all_blocks=1 00:07:17.308 --rc geninfo_unexecuted_blocks=1 00:07:17.308 00:07:17.308 ' 00:07:17.308 11:20:02 alias_rpc -- 
common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:17.308 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:17.309 --rc genhtml_branch_coverage=1 00:07:17.309 --rc genhtml_function_coverage=1 00:07:17.309 --rc genhtml_legend=1 00:07:17.309 --rc geninfo_all_blocks=1 00:07:17.309 --rc geninfo_unexecuted_blocks=1 00:07:17.309 00:07:17.309 ' 00:07:17.309 11:20:02 alias_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:17.309 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:17.309 --rc genhtml_branch_coverage=1 00:07:17.309 --rc genhtml_function_coverage=1 00:07:17.309 --rc genhtml_legend=1 00:07:17.309 --rc geninfo_all_blocks=1 00:07:17.309 --rc geninfo_unexecuted_blocks=1 00:07:17.309 00:07:17.309 ' 00:07:17.309 11:20:02 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:17.309 11:20:02 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=59834 00:07:17.309 11:20:02 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:17.309 11:20:02 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 59834 00:07:17.309 11:20:02 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 59834 ']' 00:07:17.309 11:20:02 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:17.309 11:20:02 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:17.309 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:17.309 11:20:02 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:17.309 11:20:02 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:17.309 11:20:02 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:17.309 [2024-11-20 11:20:02.789210] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 
00:07:17.309 [2024-11-20 11:20:02.789322] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59834 ] 00:07:17.567 [2024-11-20 11:20:02.934097] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:17.567 [2024-11-20 11:20:02.967502] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:17.567 11:20:03 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:17.567 11:20:03 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:17.567 11:20:03 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:07:18.146 11:20:03 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 59834 00:07:18.146 11:20:03 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 59834 ']' 00:07:18.146 11:20:03 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 59834 00:07:18.146 11:20:03 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:07:18.146 11:20:03 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:18.146 11:20:03 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59834 00:07:18.146 11:20:03 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:18.146 11:20:03 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:18.146 killing process with pid 59834 00:07:18.146 11:20:03 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59834' 00:07:18.146 11:20:03 alias_rpc -- common/autotest_common.sh@973 -- # kill 59834 00:07:18.146 11:20:03 alias_rpc -- common/autotest_common.sh@978 -- # wait 59834 00:07:18.431 00:07:18.431 real 0m1.201s 00:07:18.431 user 0m1.376s 00:07:18.431 sys 0m0.332s 00:07:18.431 11:20:03 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:18.431 11:20:03 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:18.431 ************************************ 00:07:18.431 END TEST alias_rpc 00:07:18.431 ************************************ 00:07:18.431 11:20:03 -- spdk/autotest.sh@163 -- # [[ 1 -eq 0 ]] 00:07:18.431 11:20:03 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:07:18.432 11:20:03 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:18.432 11:20:03 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:18.432 11:20:03 -- common/autotest_common.sh@10 -- # set +x 00:07:18.432 ************************************ 00:07:18.432 START TEST dpdk_mem_utility 00:07:18.432 ************************************ 00:07:18.432 11:20:03 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:07:18.432 * Looking for test storage... 
00:07:18.432 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:07:18.432 11:20:03 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:18.432 11:20:03 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:18.432 11:20:03 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lcov --version 00:07:18.432 11:20:03 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:18.432 11:20:03 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:18.432 11:20:03 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:18.432 11:20:03 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:18.432 11:20:03 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:07:18.432 11:20:03 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:07:18.432 11:20:03 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:07:18.432 11:20:03 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:07:18.432 11:20:03 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:07:18.432 11:20:03 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:07:18.432 11:20:03 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:07:18.432 11:20:03 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:18.432 11:20:03 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:07:18.432 11:20:03 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:07:18.432 11:20:03 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:18.432 11:20:03 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:18.432 11:20:03 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:07:18.432 11:20:03 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:07:18.432 11:20:03 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:18.432 11:20:03 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:07:18.432 11:20:03 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:07:18.432 11:20:03 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:07:18.432 11:20:03 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:07:18.432 11:20:03 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:18.432 11:20:03 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:07:18.432 11:20:03 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:07:18.432 11:20:03 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:18.432 11:20:03 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:18.432 11:20:03 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:07:18.432 11:20:03 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:18.432 11:20:03 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:18.432 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:18.432 --rc genhtml_branch_coverage=1 00:07:18.432 --rc genhtml_function_coverage=1 00:07:18.432 --rc genhtml_legend=1 00:07:18.432 --rc geninfo_all_blocks=1 00:07:18.432 --rc geninfo_unexecuted_blocks=1 00:07:18.432 00:07:18.432 ' 00:07:18.432 11:20:03 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:18.432 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:18.432 --rc 
genhtml_branch_coverage=1 00:07:18.432 --rc genhtml_function_coverage=1 00:07:18.432 --rc genhtml_legend=1 00:07:18.432 --rc geninfo_all_blocks=1 00:07:18.432 --rc geninfo_unexecuted_blocks=1 00:07:18.432 00:07:18.432 ' 00:07:18.432 11:20:03 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:18.432 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:18.432 --rc genhtml_branch_coverage=1 00:07:18.432 --rc genhtml_function_coverage=1 00:07:18.432 --rc genhtml_legend=1 00:07:18.432 --rc geninfo_all_blocks=1 00:07:18.432 --rc geninfo_unexecuted_blocks=1 00:07:18.432 00:07:18.432 ' 00:07:18.432 11:20:03 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:18.432 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:18.432 --rc genhtml_branch_coverage=1 00:07:18.432 --rc genhtml_function_coverage=1 00:07:18.432 --rc genhtml_legend=1 00:07:18.432 --rc geninfo_all_blocks=1 00:07:18.432 --rc geninfo_unexecuted_blocks=1 00:07:18.432 00:07:18.432 ' 00:07:18.432 11:20:03 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:07:18.432 11:20:03 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=59915 00:07:18.432 11:20:03 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 59915 00:07:18.432 11:20:03 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 59915 ']' 00:07:18.432 11:20:03 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:18.432 11:20:03 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:18.432 11:20:03 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:18.432 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:18.432 11:20:03 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:18.432 11:20:03 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:18.432 11:20:03 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:07:18.691 [2024-11-20 11:20:04.056052] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 
00:07:18.691 [2024-11-20 11:20:04.056560] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59915 ] 00:07:18.691 [2024-11-20 11:20:04.199377] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:18.691 [2024-11-20 11:20:04.236638] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:18.949 11:20:04 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:18.949 11:20:04 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:07:18.949 11:20:04 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:07:18.949 11:20:04 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:07:18.949 11:20:04 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:18.949 11:20:04 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:07:18.949 { 00:07:18.949 "filename": "/tmp/spdk_mem_dump.txt" 00:07:18.949 } 00:07:18.949 11:20:04 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:18.949 11:20:04 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:07:18.949 DPDK memory size 810.000000 MiB in 1 heap(s) 00:07:18.949 1 heaps totaling size 810.000000 MiB 00:07:18.949 size: 810.000000 MiB heap id: 0 00:07:18.949 end heaps---------- 00:07:18.949 9 mempools totaling size 595.772034 MiB 00:07:18.949 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:07:18.949 size: 158.602051 MiB name: PDU_data_out_Pool 00:07:18.949 size: 92.545471 MiB name: bdev_io_59915 00:07:18.949 size: 50.003479 MiB name: msgpool_59915 00:07:18.949 size: 36.509338 MiB name: fsdev_io_59915 00:07:18.949 size: 21.763794 MiB name: PDU_Pool 00:07:18.949 size: 19.513306 MiB name: SCSI_TASK_Pool 00:07:18.949 size: 4.133484 MiB name: evtpool_59915 00:07:18.949 size: 0.026123 MiB name: Session_Pool 00:07:18.949 end mempools------- 00:07:18.949 6 memzones totaling size 4.142822 MiB 00:07:18.949 size: 1.000366 MiB name: RG_ring_0_59915 00:07:18.949 size: 1.000366 MiB name: RG_ring_1_59915 00:07:18.949 size: 1.000366 MiB name: RG_ring_4_59915 00:07:18.949 size: 1.000366 MiB name: RG_ring_5_59915 00:07:18.949 size: 0.125366 MiB name: RG_ring_2_59915 00:07:18.949 size: 0.015991 MiB name: RG_ring_3_59915 00:07:18.949 end memzones------- 00:07:18.949 11:20:04 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:07:19.209 heap id: 0 total size: 810.000000 MiB number of busy elements: 231 number of free elements: 15 00:07:19.209 list of free elements. 
size: 10.828247 MiB 00:07:19.209 element at address: 0x200018a00000 with size: 0.999878 MiB 00:07:19.209 element at address: 0x200018c00000 with size: 0.999878 MiB 00:07:19.209 element at address: 0x200000400000 with size: 0.996338 MiB 00:07:19.209 element at address: 0x200031800000 with size: 0.994446 MiB 00:07:19.209 element at address: 0x200006400000 with size: 0.959839 MiB 00:07:19.209 element at address: 0x200012c00000 with size: 0.954285 MiB 00:07:19.209 element at address: 0x200018e00000 with size: 0.936584 MiB 00:07:19.209 element at address: 0x200000200000 with size: 0.717346 MiB 00:07:19.209 element at address: 0x20001a600000 with size: 0.570801 MiB 00:07:19.209 element at address: 0x200000c00000 with size: 0.491028 MiB 00:07:19.209 element at address: 0x20000a600000 with size: 0.489441 MiB 00:07:19.209 element at address: 0x200019000000 with size: 0.485657 MiB 00:07:19.209 element at address: 0x200003e00000 with size: 0.481018 MiB 00:07:19.209 element at address: 0x200027a00000 with size: 0.398315 MiB 00:07:19.209 element at address: 0x200000800000 with size: 0.353394 MiB 00:07:19.209 list of standard malloc elements. size: 199.252869 MiB 00:07:19.209 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:07:19.209 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:07:19.209 element at address: 0x200018afff80 with size: 1.000122 MiB 00:07:19.209 element at address: 0x200018cfff80 with size: 1.000122 MiB 00:07:19.209 element at address: 0x200018efff80 with size: 1.000122 MiB 00:07:19.209 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:07:19.209 element at address: 0x200018eeff00 with size: 0.062622 MiB 00:07:19.209 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:07:19.209 element at address: 0x200018eefdc0 with size: 0.000305 MiB 00:07:19.209 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:07:19.209 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:07:19.209 element at address: 0x2000004ff100 with size: 0.000183 MiB 00:07:19.209 element at address: 0x2000004ff1c0 with size: 0.000183 MiB 00:07:19.209 element at address: 0x2000004ff280 with size: 0.000183 MiB 00:07:19.209 element at address: 0x2000004ff340 with size: 0.000183 MiB 00:07:19.209 element at address: 0x2000004ff400 with size: 0.000183 MiB 00:07:19.209 element at address: 0x2000004ff4c0 with size: 0.000183 MiB 00:07:19.209 element at address: 0x2000004ff580 with size: 0.000183 MiB 00:07:19.209 element at address: 0x2000004ff640 with size: 0.000183 MiB 00:07:19.209 element at address: 0x2000004ff700 with size: 0.000183 MiB 00:07:19.209 element at address: 0x2000004ff7c0 with size: 0.000183 MiB 00:07:19.209 element at address: 0x2000004ff880 with size: 0.000183 MiB 00:07:19.209 element at address: 0x2000004ff940 with size: 0.000183 MiB 00:07:19.209 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:07:19.209 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:07:19.209 element at address: 0x2000004ffb80 with size: 0.000183 MiB 00:07:19.209 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:07:19.209 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:07:19.209 element at address: 0x20000085a780 with size: 0.000183 MiB 00:07:19.210 element at address: 0x20000085a980 with size: 0.000183 MiB 00:07:19.210 element at address: 0x20000085ec40 with size: 0.000183 MiB 00:07:19.210 element at address: 0x20000087ef00 with size: 0.000183 MiB 00:07:19.210 element at address: 0x20000087efc0 with size: 0.000183 MiB 
00:07:19.210 element at address: 0x20000087f080 with size: 0.000183 MiB 00:07:19.210 element at address: 0x20000087f140 with size: 0.000183 MiB 00:07:19.210 element at address: 0x20000087f200 with size: 0.000183 MiB 00:07:19.210 element at address: 0x20000087f2c0 with size: 0.000183 MiB 00:07:19.210 element at address: 0x20000087f380 with size: 0.000183 MiB 00:07:19.210 element at address: 0x20000087f440 with size: 0.000183 MiB 00:07:19.210 element at address: 0x20000087f500 with size: 0.000183 MiB 00:07:19.210 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:07:19.210 element at address: 0x20000087f680 with size: 0.000183 MiB 00:07:19.210 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:07:19.210 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:07:19.210 element at address: 0x200000c7db40 with size: 0.000183 MiB 00:07:19.210 element at address: 0x200000c7dc00 with size: 0.000183 MiB 00:07:19.210 element at address: 0x200000c7dcc0 with size: 0.000183 MiB 00:07:19.210 element at address: 0x200000c7dd80 with size: 0.000183 MiB 00:07:19.210 element at address: 0x200000c7de40 with size: 0.000183 MiB 00:07:19.210 element at address: 0x200000c7df00 with size: 0.000183 MiB 00:07:19.210 element at address: 0x200000c7dfc0 with size: 0.000183 MiB 00:07:19.210 element at address: 0x200000c7e080 with size: 0.000183 MiB 00:07:19.210 element at address: 0x200000c7e140 with size: 0.000183 MiB 00:07:19.210 element at address: 0x200000c7e200 with size: 0.000183 MiB 00:07:19.210 element at address: 0x200000c7e2c0 with size: 0.000183 MiB 00:07:19.210 element at address: 0x200000c7e380 with size: 0.000183 MiB 00:07:19.210 element at address: 0x200000c7e440 with size: 0.000183 MiB 00:07:19.210 element at address: 0x200000c7e500 with size: 0.000183 MiB 00:07:19.210 element at address: 0x200000c7e5c0 with size: 0.000183 MiB 00:07:19.210 element at address: 0x200000c7e680 with size: 0.000183 MiB 00:07:19.210 element at address: 0x200000c7e740 with size: 0.000183 MiB 00:07:19.210 element at address: 0x200000c7e800 with size: 0.000183 MiB 00:07:19.210 element at address: 0x200000c7e8c0 with size: 0.000183 MiB 00:07:19.210 element at address: 0x200000c7e980 with size: 0.000183 MiB 00:07:19.210 element at address: 0x200000c7ea40 with size: 0.000183 MiB 00:07:19.210 element at address: 0x200000c7eb00 with size: 0.000183 MiB 00:07:19.210 element at address: 0x200000c7ebc0 with size: 0.000183 MiB 00:07:19.210 element at address: 0x200000c7ec80 with size: 0.000183 MiB 00:07:19.210 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:07:19.210 element at address: 0x200000cff000 with size: 0.000183 MiB 00:07:19.210 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:07:19.210 element at address: 0x200003e7b240 with size: 0.000183 MiB 00:07:19.210 element at address: 0x200003e7b300 with size: 0.000183 MiB 00:07:19.210 element at address: 0x200003e7b3c0 with size: 0.000183 MiB 00:07:19.210 element at address: 0x200003e7b480 with size: 0.000183 MiB 00:07:19.210 element at address: 0x200003e7b540 with size: 0.000183 MiB 00:07:19.210 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:07:19.210 element at address: 0x200003e7b6c0 with size: 0.000183 MiB 00:07:19.210 element at address: 0x200003efb980 with size: 0.000183 MiB 00:07:19.210 element at address: 0x2000064fdd80 with size: 0.000183 MiB 00:07:19.210 element at address: 0x20000a67d4c0 with size: 0.000183 MiB 00:07:19.210 element at address: 0x20000a67d580 with size: 0.000183 MiB 00:07:19.210 element at 
address: 0x20000a67d640 with size: 0.000183 MiB 00:07:19.210 element at address: 0x20000a67d700 with size: 0.000183 MiB 00:07:19.210 element at address: 0x20000a67d7c0 with size: 0.000183 MiB 00:07:19.210 element at address: 0x20000a67d880 with size: 0.000183 MiB 00:07:19.210 element at address: 0x20000a67d940 with size: 0.000183 MiB 00:07:19.210 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:07:19.210 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:07:19.210 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 00:07:19.210 element at address: 0x200012cf44c0 with size: 0.000183 MiB 00:07:19.210 element at address: 0x200018eefc40 with size: 0.000183 MiB 00:07:19.210 element at address: 0x200018eefd00 with size: 0.000183 MiB 00:07:19.210 element at address: 0x2000190bc740 with size: 0.000183 MiB 00:07:19.210 element at address: 0x20001a692200 with size: 0.000183 MiB 00:07:19.210 element at address: 0x20001a6922c0 with size: 0.000183 MiB 00:07:19.210 element at address: 0x20001a692380 with size: 0.000183 MiB 00:07:19.210 element at address: 0x20001a692440 with size: 0.000183 MiB 00:07:19.210 element at address: 0x20001a692500 with size: 0.000183 MiB 00:07:19.210 element at address: 0x20001a6925c0 with size: 0.000183 MiB 00:07:19.210 element at address: 0x20001a692680 with size: 0.000183 MiB 00:07:19.210 element at address: 0x20001a692740 with size: 0.000183 MiB 00:07:19.210 element at address: 0x20001a692800 with size: 0.000183 MiB 00:07:19.210 element at address: 0x20001a6928c0 with size: 0.000183 MiB 00:07:19.210 element at address: 0x20001a692980 with size: 0.000183 MiB 00:07:19.210 element at address: 0x20001a692a40 with size: 0.000183 MiB 00:07:19.210 element at address: 0x20001a692b00 with size: 0.000183 MiB 00:07:19.210 element at address: 0x20001a692bc0 with size: 0.000183 MiB 00:07:19.210 element at address: 0x20001a692c80 with size: 0.000183 MiB 00:07:19.210 element at address: 0x20001a692d40 with size: 0.000183 MiB 00:07:19.210 element at address: 0x20001a692e00 with size: 0.000183 MiB 00:07:19.210 element at address: 0x20001a692ec0 with size: 0.000183 MiB 00:07:19.210 element at address: 0x20001a692f80 with size: 0.000183 MiB 00:07:19.210 element at address: 0x20001a693040 with size: 0.000183 MiB 00:07:19.210 element at address: 0x20001a693100 with size: 0.000183 MiB 00:07:19.210 element at address: 0x20001a6931c0 with size: 0.000183 MiB 00:07:19.210 element at address: 0x20001a693280 with size: 0.000183 MiB 00:07:19.210 element at address: 0x20001a693340 with size: 0.000183 MiB 00:07:19.210 element at address: 0x20001a693400 with size: 0.000183 MiB 00:07:19.210 element at address: 0x20001a6934c0 with size: 0.000183 MiB 00:07:19.210 element at address: 0x20001a693580 with size: 0.000183 MiB 00:07:19.210 element at address: 0x20001a693640 with size: 0.000183 MiB 00:07:19.210 element at address: 0x20001a693700 with size: 0.000183 MiB 00:07:19.210 element at address: 0x20001a6937c0 with size: 0.000183 MiB 00:07:19.210 element at address: 0x20001a693880 with size: 0.000183 MiB 00:07:19.210 element at address: 0x20001a693940 with size: 0.000183 MiB 00:07:19.210 element at address: 0x20001a693a00 with size: 0.000183 MiB 00:07:19.210 element at address: 0x20001a693ac0 with size: 0.000183 MiB 00:07:19.210 element at address: 0x20001a693b80 with size: 0.000183 MiB 00:07:19.210 element at address: 0x20001a693c40 with size: 0.000183 MiB 00:07:19.210 element at address: 0x20001a693d00 with size: 0.000183 MiB 00:07:19.210 element at address: 0x20001a693dc0 
with size: 0.000183 MiB 00:07:19.210 element at address: 0x20001a693e80 with size: 0.000183 MiB 00:07:19.210 element at address: 0x20001a693f40 with size: 0.000183 MiB 00:07:19.210 element at address: 0x20001a694000 with size: 0.000183 MiB 00:07:19.210 element at address: 0x20001a6940c0 with size: 0.000183 MiB 00:07:19.210 element at address: 0x20001a694180 with size: 0.000183 MiB 00:07:19.210 element at address: 0x20001a694240 with size: 0.000183 MiB 00:07:19.210 element at address: 0x20001a694300 with size: 0.000183 MiB 00:07:19.210 element at address: 0x20001a6943c0 with size: 0.000183 MiB 00:07:19.210 element at address: 0x20001a694480 with size: 0.000183 MiB 00:07:19.210 element at address: 0x20001a694540 with size: 0.000183 MiB 00:07:19.210 element at address: 0x20001a694600 with size: 0.000183 MiB 00:07:19.210 element at address: 0x20001a6946c0 with size: 0.000183 MiB 00:07:19.210 element at address: 0x20001a694780 with size: 0.000183 MiB 00:07:19.210 element at address: 0x20001a694840 with size: 0.000183 MiB 00:07:19.210 element at address: 0x20001a694900 with size: 0.000183 MiB 00:07:19.210 element at address: 0x20001a6949c0 with size: 0.000183 MiB 00:07:19.210 element at address: 0x20001a694a80 with size: 0.000183 MiB 00:07:19.210 element at address: 0x20001a694b40 with size: 0.000183 MiB 00:07:19.210 element at address: 0x20001a694c00 with size: 0.000183 MiB 00:07:19.210 element at address: 0x20001a694cc0 with size: 0.000183 MiB 00:07:19.210 element at address: 0x20001a694d80 with size: 0.000183 MiB 00:07:19.210 element at address: 0x20001a694e40 with size: 0.000183 MiB 00:07:19.210 element at address: 0x20001a694f00 with size: 0.000183 MiB 00:07:19.210 element at address: 0x20001a694fc0 with size: 0.000183 MiB 00:07:19.210 element at address: 0x20001a695080 with size: 0.000183 MiB 00:07:19.210 element at address: 0x20001a695140 with size: 0.000183 MiB 00:07:19.210 element at address: 0x20001a695200 with size: 0.000183 MiB 00:07:19.210 element at address: 0x20001a6952c0 with size: 0.000183 MiB 00:07:19.210 element at address: 0x20001a695380 with size: 0.000183 MiB 00:07:19.210 element at address: 0x20001a695440 with size: 0.000183 MiB 00:07:19.210 element at address: 0x200027a65f80 with size: 0.000183 MiB 00:07:19.210 element at address: 0x200027a66040 with size: 0.000183 MiB 00:07:19.210 element at address: 0x200027a6cc40 with size: 0.000183 MiB 00:07:19.210 element at address: 0x200027a6ce40 with size: 0.000183 MiB 00:07:19.210 element at address: 0x200027a6cf00 with size: 0.000183 MiB 00:07:19.210 element at address: 0x200027a6cfc0 with size: 0.000183 MiB 00:07:19.210 element at address: 0x200027a6d080 with size: 0.000183 MiB 00:07:19.210 element at address: 0x200027a6d140 with size: 0.000183 MiB 00:07:19.210 element at address: 0x200027a6d200 with size: 0.000183 MiB 00:07:19.210 element at address: 0x200027a6d2c0 with size: 0.000183 MiB 00:07:19.210 element at address: 0x200027a6d380 with size: 0.000183 MiB 00:07:19.210 element at address: 0x200027a6d440 with size: 0.000183 MiB 00:07:19.210 element at address: 0x200027a6d500 with size: 0.000183 MiB 00:07:19.210 element at address: 0x200027a6d5c0 with size: 0.000183 MiB 00:07:19.210 element at address: 0x200027a6d680 with size: 0.000183 MiB 00:07:19.210 element at address: 0x200027a6d740 with size: 0.000183 MiB 00:07:19.210 element at address: 0x200027a6d800 with size: 0.000183 MiB 00:07:19.210 element at address: 0x200027a6d8c0 with size: 0.000183 MiB 00:07:19.210 element at address: 0x200027a6d980 with size: 0.000183 MiB 
00:07:19.210 element at address: 0x200027a6da40 with size: 0.000183 MiB 00:07:19.211 element at address: 0x200027a6db00 with size: 0.000183 MiB 00:07:19.211 element at address: 0x200027a6dbc0 with size: 0.000183 MiB 00:07:19.211 element at address: 0x200027a6dc80 with size: 0.000183 MiB 00:07:19.211 element at address: 0x200027a6dd40 with size: 0.000183 MiB 00:07:19.211 element at address: 0x200027a6de00 with size: 0.000183 MiB 00:07:19.211 element at address: 0x200027a6dec0 with size: 0.000183 MiB 00:07:19.211 element at address: 0x200027a6df80 with size: 0.000183 MiB 00:07:19.211 element at address: 0x200027a6e040 with size: 0.000183 MiB 00:07:19.211 element at address: 0x200027a6e100 with size: 0.000183 MiB 00:07:19.211 element at address: 0x200027a6e1c0 with size: 0.000183 MiB 00:07:19.211 element at address: 0x200027a6e280 with size: 0.000183 MiB 00:07:19.211 element at address: 0x200027a6e340 with size: 0.000183 MiB 00:07:19.211 element at address: 0x200027a6e400 with size: 0.000183 MiB 00:07:19.211 element at address: 0x200027a6e4c0 with size: 0.000183 MiB 00:07:19.211 element at address: 0x200027a6e580 with size: 0.000183 MiB 00:07:19.211 element at address: 0x200027a6e640 with size: 0.000183 MiB 00:07:19.211 element at address: 0x200027a6e700 with size: 0.000183 MiB 00:07:19.211 element at address: 0x200027a6e7c0 with size: 0.000183 MiB 00:07:19.211 element at address: 0x200027a6e880 with size: 0.000183 MiB 00:07:19.211 element at address: 0x200027a6e940 with size: 0.000183 MiB 00:07:19.211 element at address: 0x200027a6ea00 with size: 0.000183 MiB 00:07:19.211 element at address: 0x200027a6eac0 with size: 0.000183 MiB 00:07:19.211 element at address: 0x200027a6eb80 with size: 0.000183 MiB 00:07:19.211 element at address: 0x200027a6ec40 with size: 0.000183 MiB 00:07:19.211 element at address: 0x200027a6ed00 with size: 0.000183 MiB 00:07:19.211 element at address: 0x200027a6edc0 with size: 0.000183 MiB 00:07:19.211 element at address: 0x200027a6ee80 with size: 0.000183 MiB 00:07:19.211 element at address: 0x200027a6ef40 with size: 0.000183 MiB 00:07:19.211 element at address: 0x200027a6f000 with size: 0.000183 MiB 00:07:19.211 element at address: 0x200027a6f0c0 with size: 0.000183 MiB 00:07:19.211 element at address: 0x200027a6f180 with size: 0.000183 MiB 00:07:19.211 element at address: 0x200027a6f240 with size: 0.000183 MiB 00:07:19.211 element at address: 0x200027a6f300 with size: 0.000183 MiB 00:07:19.211 element at address: 0x200027a6f3c0 with size: 0.000183 MiB 00:07:19.211 element at address: 0x200027a6f480 with size: 0.000183 MiB 00:07:19.211 element at address: 0x200027a6f540 with size: 0.000183 MiB 00:07:19.211 element at address: 0x200027a6f600 with size: 0.000183 MiB 00:07:19.211 element at address: 0x200027a6f6c0 with size: 0.000183 MiB 00:07:19.211 element at address: 0x200027a6f780 with size: 0.000183 MiB 00:07:19.211 element at address: 0x200027a6f840 with size: 0.000183 MiB 00:07:19.211 element at address: 0x200027a6f900 with size: 0.000183 MiB 00:07:19.211 element at address: 0x200027a6f9c0 with size: 0.000183 MiB 00:07:19.211 element at address: 0x200027a6fa80 with size: 0.000183 MiB 00:07:19.211 element at address: 0x200027a6fb40 with size: 0.000183 MiB 00:07:19.211 element at address: 0x200027a6fc00 with size: 0.000183 MiB 00:07:19.211 element at address: 0x200027a6fcc0 with size: 0.000183 MiB 00:07:19.211 element at address: 0x200027a6fd80 with size: 0.000183 MiB 00:07:19.211 element at address: 0x200027a6fe40 with size: 0.000183 MiB 00:07:19.211 element at 
address: 0x200027a6ff00 with size: 0.000183 MiB 00:07:19.211 list of memzone associated elements. size: 599.918884 MiB 00:07:19.211 element at address: 0x20001a695500 with size: 211.416748 MiB 00:07:19.211 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:07:19.211 element at address: 0x200027a6ffc0 with size: 157.562561 MiB 00:07:19.211 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:07:19.211 element at address: 0x200012df4780 with size: 92.045044 MiB 00:07:19.211 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_59915_0 00:07:19.211 element at address: 0x200000dff380 with size: 48.003052 MiB 00:07:19.211 associated memzone info: size: 48.002930 MiB name: MP_msgpool_59915_0 00:07:19.211 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:07:19.211 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_59915_0 00:07:19.211 element at address: 0x2000191be940 with size: 20.255554 MiB 00:07:19.211 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:07:19.211 element at address: 0x2000319feb40 with size: 18.005066 MiB 00:07:19.211 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:07:19.211 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:07:19.211 associated memzone info: size: 3.000122 MiB name: MP_evtpool_59915_0 00:07:19.211 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:07:19.211 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_59915 00:07:19.211 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:07:19.211 associated memzone info: size: 1.007996 MiB name: MP_evtpool_59915 00:07:19.211 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:07:19.211 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:07:19.211 element at address: 0x2000190bc800 with size: 1.008118 MiB 00:07:19.211 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:07:19.211 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:07:19.211 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:07:19.211 element at address: 0x200003efba40 with size: 1.008118 MiB 00:07:19.211 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:07:19.211 element at address: 0x200000cff180 with size: 1.000488 MiB 00:07:19.211 associated memzone info: size: 1.000366 MiB name: RG_ring_0_59915 00:07:19.211 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:07:19.211 associated memzone info: size: 1.000366 MiB name: RG_ring_1_59915 00:07:19.211 element at address: 0x200012cf4580 with size: 1.000488 MiB 00:07:19.211 associated memzone info: size: 1.000366 MiB name: RG_ring_4_59915 00:07:19.211 element at address: 0x2000318fe940 with size: 1.000488 MiB 00:07:19.211 associated memzone info: size: 1.000366 MiB name: RG_ring_5_59915 00:07:19.211 element at address: 0x20000087f740 with size: 0.500488 MiB 00:07:19.211 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_59915 00:07:19.211 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:07:19.211 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_59915 00:07:19.211 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:07:19.211 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:07:19.211 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:07:19.211 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:07:19.211 element at 
address: 0x20001907c540 with size: 0.250488 MiB 00:07:19.211 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:07:19.211 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:07:19.211 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_59915 00:07:19.211 element at address: 0x20000085ed00 with size: 0.125488 MiB 00:07:19.211 associated memzone info: size: 0.125366 MiB name: RG_ring_2_59915 00:07:19.211 element at address: 0x2000064f5b80 with size: 0.031738 MiB 00:07:19.211 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:07:19.211 element at address: 0x200027a66100 with size: 0.023743 MiB 00:07:19.211 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:07:19.211 element at address: 0x20000085aa40 with size: 0.016113 MiB 00:07:19.211 associated memzone info: size: 0.015991 MiB name: RG_ring_3_59915 00:07:19.211 element at address: 0x200027a6c240 with size: 0.002441 MiB 00:07:19.211 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:07:19.211 element at address: 0x2000004ffc40 with size: 0.000305 MiB 00:07:19.211 associated memzone info: size: 0.000183 MiB name: MP_msgpool_59915 00:07:19.211 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:07:19.211 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_59915 00:07:19.211 element at address: 0x20000085a840 with size: 0.000305 MiB 00:07:19.211 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_59915 00:07:19.211 element at address: 0x200027a6cd00 with size: 0.000305 MiB 00:07:19.211 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:07:19.211 11:20:04 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:07:19.211 11:20:04 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 59915 00:07:19.211 11:20:04 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 59915 ']' 00:07:19.211 11:20:04 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 59915 00:07:19.211 11:20:04 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:07:19.211 11:20:04 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:19.211 11:20:04 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59915 00:07:19.211 11:20:04 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:19.211 killing process with pid 59915 00:07:19.211 11:20:04 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:19.211 11:20:04 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59915' 00:07:19.211 11:20:04 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 59915 00:07:19.211 11:20:04 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 59915 00:07:19.470 00:07:19.470 real 0m1.082s 00:07:19.470 user 0m1.155s 00:07:19.470 sys 0m0.303s 00:07:19.470 11:20:04 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:19.470 11:20:04 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:07:19.470 ************************************ 00:07:19.470 END TEST dpdk_mem_utility 00:07:19.470 ************************************ 00:07:19.470 11:20:04 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:07:19.470 11:20:04 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:19.470 11:20:04 -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:07:19.470 11:20:04 -- common/autotest_common.sh@10 -- # set +x 00:07:19.470 ************************************ 00:07:19.470 START TEST event 00:07:19.470 ************************************ 00:07:19.470 11:20:04 event -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:07:19.471 * Looking for test storage... 00:07:19.471 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:07:19.471 11:20:04 event -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:19.471 11:20:04 event -- common/autotest_common.sh@1693 -- # lcov --version 00:07:19.471 11:20:04 event -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:19.730 11:20:05 event -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:19.730 11:20:05 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:19.730 11:20:05 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:19.730 11:20:05 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:19.730 11:20:05 event -- scripts/common.sh@336 -- # IFS=.-: 00:07:19.730 11:20:05 event -- scripts/common.sh@336 -- # read -ra ver1 00:07:19.730 11:20:05 event -- scripts/common.sh@337 -- # IFS=.-: 00:07:19.730 11:20:05 event -- scripts/common.sh@337 -- # read -ra ver2 00:07:19.730 11:20:05 event -- scripts/common.sh@338 -- # local 'op=<' 00:07:19.730 11:20:05 event -- scripts/common.sh@340 -- # ver1_l=2 00:07:19.730 11:20:05 event -- scripts/common.sh@341 -- # ver2_l=1 00:07:19.730 11:20:05 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:19.730 11:20:05 event -- scripts/common.sh@344 -- # case "$op" in 00:07:19.730 11:20:05 event -- scripts/common.sh@345 -- # : 1 00:07:19.730 11:20:05 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:19.730 11:20:05 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:19.730 11:20:05 event -- scripts/common.sh@365 -- # decimal 1 00:07:19.730 11:20:05 event -- scripts/common.sh@353 -- # local d=1 00:07:19.730 11:20:05 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:19.730 11:20:05 event -- scripts/common.sh@355 -- # echo 1 00:07:19.730 11:20:05 event -- scripts/common.sh@365 -- # ver1[v]=1 00:07:19.730 11:20:05 event -- scripts/common.sh@366 -- # decimal 2 00:07:19.730 11:20:05 event -- scripts/common.sh@353 -- # local d=2 00:07:19.730 11:20:05 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:19.730 11:20:05 event -- scripts/common.sh@355 -- # echo 2 00:07:19.730 11:20:05 event -- scripts/common.sh@366 -- # ver2[v]=2 00:07:19.730 11:20:05 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:19.730 11:20:05 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:19.730 11:20:05 event -- scripts/common.sh@368 -- # return 0 00:07:19.730 11:20:05 event -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:19.730 11:20:05 event -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:19.730 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:19.730 --rc genhtml_branch_coverage=1 00:07:19.730 --rc genhtml_function_coverage=1 00:07:19.730 --rc genhtml_legend=1 00:07:19.730 --rc geninfo_all_blocks=1 00:07:19.730 --rc geninfo_unexecuted_blocks=1 00:07:19.730 00:07:19.730 ' 00:07:19.730 11:20:05 event -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:19.730 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:19.730 --rc genhtml_branch_coverage=1 00:07:19.730 --rc genhtml_function_coverage=1 00:07:19.730 --rc genhtml_legend=1 00:07:19.730 --rc geninfo_all_blocks=1 00:07:19.730 --rc geninfo_unexecuted_blocks=1 00:07:19.730 00:07:19.730 ' 00:07:19.730 11:20:05 event -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:19.730 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:19.730 --rc genhtml_branch_coverage=1 00:07:19.730 --rc genhtml_function_coverage=1 00:07:19.730 --rc genhtml_legend=1 00:07:19.730 --rc geninfo_all_blocks=1 00:07:19.730 --rc geninfo_unexecuted_blocks=1 00:07:19.730 00:07:19.730 ' 00:07:19.730 11:20:05 event -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:19.730 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:19.730 --rc genhtml_branch_coverage=1 00:07:19.730 --rc genhtml_function_coverage=1 00:07:19.730 --rc genhtml_legend=1 00:07:19.730 --rc geninfo_all_blocks=1 00:07:19.730 --rc geninfo_unexecuted_blocks=1 00:07:19.730 00:07:19.730 ' 00:07:19.730 11:20:05 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:07:19.730 11:20:05 event -- bdev/nbd_common.sh@6 -- # set -e 00:07:19.730 11:20:05 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:07:19.730 11:20:05 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:07:19.730 11:20:05 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:19.730 11:20:05 event -- common/autotest_common.sh@10 -- # set +x 00:07:19.730 ************************************ 00:07:19.730 START TEST event_perf 00:07:19.730 ************************************ 00:07:19.730 11:20:05 event.event_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:07:19.730 Running I/O for 1 seconds...[2024-11-20 
11:20:05.141664] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 00:07:19.730 [2024-11-20 11:20:05.141816] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59999 ] 00:07:19.730 [2024-11-20 11:20:05.295082] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:19.990 [2024-11-20 11:20:05.346803] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:19.990 [2024-11-20 11:20:05.346879] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:19.990 [2024-11-20 11:20:05.346987] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.990 [2024-11-20 11:20:05.346982] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:20.925 Running I/O for 1 seconds... 00:07:20.925 lcore 0: 178793 00:07:20.925 lcore 1: 178793 00:07:20.925 lcore 2: 178790 00:07:20.925 lcore 3: 178791 00:07:20.925 done. 00:07:20.925 00:07:20.925 real 0m1.269s 00:07:20.925 user 0m4.078s 00:07:20.925 sys 0m0.048s 00:07:20.925 11:20:06 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:20.925 11:20:06 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:07:20.925 ************************************ 00:07:20.925 END TEST event_perf 00:07:20.925 ************************************ 00:07:20.925 11:20:06 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:07:20.925 11:20:06 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:07:20.925 11:20:06 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:20.925 11:20:06 event -- common/autotest_common.sh@10 -- # set +x 00:07:20.925 ************************************ 00:07:20.925 START TEST event_reactor 00:07:20.925 ************************************ 00:07:20.925 11:20:06 event.event_reactor -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:07:20.925 [2024-11-20 11:20:06.452971] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 
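Editorial note: the event_perf run that just completed drives SPDK's reactor/event framework on a four-core mask for a fixed time and prints one event count per lcore; the reactor and reactor_perf runs that follow exercise the same framework on a single reactor. Outside the harness the traced commands reduce to the short sketch below; the checkout path matches this job's layout and is otherwise an assumption.

# Illustrative stand-alone invocation of the three event-framework tests seen in this log;
# -m is the reactor core mask and -t the run time in seconds (both taken from the trace).
SPDK_DIR=/home/vagrant/spdk_repo/spdk            # assumed checkout location (matches the log)
"$SPDK_DIR/test/event/event_perf/event_perf" -m 0xF -t 1     # per-lcore event throughput
"$SPDK_DIR/test/event/reactor/reactor" -t 1                  # single-reactor timer/poller ticks
"$SPDK_DIR/test/event/reactor_perf/reactor_perf" -t 1        # single-reactor events per second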
00:07:20.925 [2024-11-20 11:20:06.453060] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60042 ] 00:07:21.183 [2024-11-20 11:20:06.595701] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:21.183 [2024-11-20 11:20:06.631061] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:22.118 test_start 00:07:22.118 oneshot 00:07:22.118 tick 100 00:07:22.118 tick 100 00:07:22.118 tick 250 00:07:22.118 tick 100 00:07:22.118 tick 100 00:07:22.118 tick 250 00:07:22.118 tick 100 00:07:22.118 tick 500 00:07:22.118 tick 100 00:07:22.118 tick 100 00:07:22.118 tick 250 00:07:22.118 tick 100 00:07:22.118 tick 100 00:07:22.118 test_end 00:07:22.118 00:07:22.118 real 0m1.240s 00:07:22.118 user 0m1.097s 00:07:22.118 sys 0m0.034s 00:07:22.118 11:20:07 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:22.118 11:20:07 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:07:22.118 ************************************ 00:07:22.118 END TEST event_reactor 00:07:22.118 ************************************ 00:07:22.377 11:20:07 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:07:22.377 11:20:07 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:07:22.377 11:20:07 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:22.377 11:20:07 event -- common/autotest_common.sh@10 -- # set +x 00:07:22.377 ************************************ 00:07:22.377 START TEST event_reactor_perf 00:07:22.377 ************************************ 00:07:22.377 11:20:07 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:07:22.377 [2024-11-20 11:20:07.739331] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 
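Editorial note: each suite here is launched through the harness's run_test helper, which is what produces the interleaved START TEST / END TEST banners and the real/user/sys timing lines. A minimal equivalent, written from what the trace shows rather than from the real common/autotest_common.sh, might look like this:

# Hedged sketch of a run_test-style wrapper (not the actual autotest_common.sh implementation):
# print banners around the command and time it with the shell's time builtin.
run_test_sketch() {
    local name=$1; shift
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    time "$@"
    echo "************************************"
    echo "END TEST $name"
    echo "************************************"
}
run_test_sketch event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1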
00:07:22.377 [2024-11-20 11:20:07.739478] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60073 ] 00:07:22.377 [2024-11-20 11:20:07.881478] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:22.377 [2024-11-20 11:20:07.931665] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:23.760 test_start 00:07:23.760 test_end 00:07:23.760 Performance: 353598 events per second 00:07:23.760 00:07:23.760 real 0m1.260s 00:07:23.760 user 0m1.111s 00:07:23.760 sys 0m0.040s 00:07:23.760 11:20:08 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:23.760 11:20:08 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:07:23.760 ************************************ 00:07:23.760 END TEST event_reactor_perf 00:07:23.760 ************************************ 00:07:23.760 11:20:09 event -- event/event.sh@49 -- # uname -s 00:07:23.760 11:20:09 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:07:23.760 11:20:09 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:07:23.760 11:20:09 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:23.760 11:20:09 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:23.760 11:20:09 event -- common/autotest_common.sh@10 -- # set +x 00:07:23.760 ************************************ 00:07:23.760 START TEST event_scheduler 00:07:23.760 ************************************ 00:07:23.760 11:20:09 event.event_scheduler -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:07:23.760 * Looking for test storage... 
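Editorial note: before each suite the harness probes the installed lcov version (the lcov --version / lt 1.15 2 trace above, repeated below for the scheduler suite) and chooses which coverage option spelling to export based on the detected version. The idea can be sketched with sort -V; this is an assumption-labeled simplification, not the real cmp_versions, which splits versions on '.', '-' and ':' and compares field by field.

# Simplified version gate (assumption: sort -V ordering is sufficient for these version strings).
version_lt() { [ "$(printf '%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ] && [ "$1" != "$2" ]; }
if version_lt "$(lcov --version | awk '{print $NF}')" 2; then
    LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
fi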
00:07:23.760 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:07:23.760 11:20:09 event.event_scheduler -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:23.760 11:20:09 event.event_scheduler -- common/autotest_common.sh@1693 -- # lcov --version 00:07:23.760 11:20:09 event.event_scheduler -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:23.760 11:20:09 event.event_scheduler -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:23.760 11:20:09 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:23.760 11:20:09 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:23.760 11:20:09 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:23.760 11:20:09 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:07:23.760 11:20:09 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:07:23.760 11:20:09 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:07:23.760 11:20:09 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:07:23.760 11:20:09 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:07:23.760 11:20:09 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:07:23.760 11:20:09 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:07:23.760 11:20:09 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:23.760 11:20:09 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:07:23.760 11:20:09 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:07:23.760 11:20:09 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:23.760 11:20:09 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:23.760 11:20:09 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:07:23.760 11:20:09 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:07:23.760 11:20:09 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:23.760 11:20:09 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:07:23.760 11:20:09 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:07:23.760 11:20:09 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:07:23.760 11:20:09 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:07:23.760 11:20:09 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:23.760 11:20:09 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:07:23.760 11:20:09 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:07:23.760 11:20:09 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:23.760 11:20:09 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:23.760 11:20:09 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:07:23.760 11:20:09 event.event_scheduler -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:23.760 11:20:09 event.event_scheduler -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:23.760 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:23.760 --rc genhtml_branch_coverage=1 00:07:23.760 --rc genhtml_function_coverage=1 00:07:23.760 --rc genhtml_legend=1 00:07:23.760 --rc geninfo_all_blocks=1 00:07:23.760 --rc geninfo_unexecuted_blocks=1 00:07:23.760 00:07:23.760 ' 00:07:23.760 11:20:09 event.event_scheduler -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:23.760 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:23.760 --rc genhtml_branch_coverage=1 00:07:23.760 --rc genhtml_function_coverage=1 00:07:23.760 --rc genhtml_legend=1 00:07:23.760 --rc geninfo_all_blocks=1 00:07:23.760 --rc geninfo_unexecuted_blocks=1 00:07:23.760 00:07:23.760 ' 00:07:23.760 11:20:09 event.event_scheduler -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:23.760 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:23.760 --rc genhtml_branch_coverage=1 00:07:23.760 --rc genhtml_function_coverage=1 00:07:23.760 --rc genhtml_legend=1 00:07:23.760 --rc geninfo_all_blocks=1 00:07:23.760 --rc geninfo_unexecuted_blocks=1 00:07:23.760 00:07:23.760 ' 00:07:23.760 11:20:09 event.event_scheduler -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:23.760 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:23.760 --rc genhtml_branch_coverage=1 00:07:23.760 --rc genhtml_function_coverage=1 00:07:23.760 --rc genhtml_legend=1 00:07:23.760 --rc geninfo_all_blocks=1 00:07:23.760 --rc geninfo_unexecuted_blocks=1 00:07:23.760 00:07:23.760 ' 00:07:23.760 11:20:09 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:07:23.760 11:20:09 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=60137 00:07:23.760 11:20:09 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:07:23.760 11:20:09 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:07:23.760 11:20:09 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 60137 00:07:23.760 11:20:09 
event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 60137 ']' 00:07:23.760 11:20:09 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:23.760 11:20:09 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:23.760 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:23.760 11:20:09 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:23.760 11:20:09 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:23.760 11:20:09 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:23.760 [2024-11-20 11:20:09.274890] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 00:07:23.760 [2024-11-20 11:20:09.275001] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60137 ] 00:07:24.018 [2024-11-20 11:20:09.418287] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:24.018 [2024-11-20 11:20:09.457196] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:24.018 [2024-11-20 11:20:09.457328] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:24.018 [2024-11-20 11:20:09.457400] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:24.018 [2024-11-20 11:20:09.457404] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:24.276 11:20:09 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:24.276 11:20:09 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:07:24.276 11:20:09 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:07:24.276 11:20:09 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:24.276 11:20:09 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:24.276 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:07:24.276 POWER: Cannot set governor of lcore 0 to userspace 00:07:24.276 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:07:24.276 POWER: Cannot set governor of lcore 0 to performance 00:07:24.276 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:07:24.276 POWER: Cannot set governor of lcore 0 to userspace 00:07:24.276 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:07:24.276 POWER: Cannot set governor of lcore 0 to userspace 00:07:24.276 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:07:24.276 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:07:24.276 POWER: Unable to set Power Management Environment for lcore 0 00:07:24.276 [2024-11-20 11:20:09.735943] dpdk_governor.c: 135:_init_core: *ERROR*: Failed to initialize on core0 00:07:24.276 [2024-11-20 11:20:09.735967] dpdk_governor.c: 196:_init: *ERROR*: Failed to initialize on core0 00:07:24.276 [2024-11-20 11:20:09.735983] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:07:24.276 [2024-11-20 11:20:09.736001] 
scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:07:24.276 [2024-11-20 11:20:09.736012] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:07:24.276 [2024-11-20 11:20:09.736022] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:07:24.276 11:20:09 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:24.276 11:20:09 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:07:24.276 11:20:09 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:24.276 11:20:09 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:24.276 [2024-11-20 11:20:09.801680] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:07:24.276 11:20:09 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:24.276 11:20:09 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:07:24.276 11:20:09 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:24.276 11:20:09 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:24.276 11:20:09 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:24.276 ************************************ 00:07:24.276 START TEST scheduler_create_thread 00:07:24.276 ************************************ 00:07:24.276 11:20:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:07:24.276 11:20:09 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:07:24.276 11:20:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:24.276 11:20:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:24.276 2 00:07:24.276 11:20:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:24.276 11:20:09 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:07:24.276 11:20:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:24.276 11:20:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:24.276 3 00:07:24.276 11:20:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:24.276 11:20:09 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:07:24.276 11:20:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:24.276 11:20:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:24.276 4 00:07:24.276 11:20:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:24.276 11:20:09 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:07:24.276 11:20:09 event.event_scheduler.scheduler_create_thread 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:07:24.276 11:20:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:24.276 5 00:07:24.276 11:20:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:24.276 11:20:09 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:07:24.276 11:20:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:24.276 11:20:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:24.276 6 00:07:24.276 11:20:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:24.276 11:20:09 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:07:24.276 11:20:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:24.277 11:20:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:24.535 7 00:07:24.535 11:20:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:24.535 11:20:09 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:07:24.535 11:20:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:24.535 11:20:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:24.535 8 00:07:24.535 11:20:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:24.535 11:20:09 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:07:24.535 11:20:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:24.535 11:20:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:24.535 9 00:07:24.535 11:20:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:24.535 11:20:09 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:07:24.535 11:20:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:24.535 11:20:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:24.535 10 00:07:24.535 11:20:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:24.535 11:20:09 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:07:24.535 11:20:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:24.535 11:20:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:24.535 11:20:09 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:24.535 11:20:09 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:07:24.535 11:20:09 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:07:24.535 11:20:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:24.535 11:20:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:24.535 11:20:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:24.535 11:20:09 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:07:24.535 11:20:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:24.535 11:20:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:25.911 11:20:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:25.911 11:20:11 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:07:25.911 11:20:11 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:07:25.911 11:20:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:25.911 11:20:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:26.847 11:20:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:26.847 00:07:26.847 real 0m2.615s 00:07:26.847 user 0m0.017s 00:07:26.847 sys 0m0.005s 00:07:26.847 11:20:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:26.847 11:20:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:26.847 ************************************ 00:07:26.847 END TEST scheduler_create_thread 00:07:26.847 ************************************ 00:07:27.105 11:20:12 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:07:27.105 11:20:12 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 60137 00:07:27.105 11:20:12 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 60137 ']' 00:07:27.105 11:20:12 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 60137 00:07:27.105 11:20:12 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:07:27.106 11:20:12 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:27.106 11:20:12 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60137 00:07:27.106 11:20:12 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:07:27.106 11:20:12 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:07:27.106 11:20:12 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60137' 00:07:27.106 killing process with pid 60137 00:07:27.106 11:20:12 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 60137 00:07:27.106 11:20:12 event.event_scheduler -- 
common/autotest_common.sh@978 -- # wait 60137 00:07:27.363 [2024-11-20 11:20:12.905795] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:07:27.622 00:07:27.622 real 0m4.040s 00:07:27.622 user 0m6.640s 00:07:27.622 sys 0m0.316s 00:07:27.622 11:20:13 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:27.622 11:20:13 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:27.622 ************************************ 00:07:27.622 END TEST event_scheduler 00:07:27.622 ************************************ 00:07:27.622 11:20:13 event -- event/event.sh@51 -- # modprobe -n nbd 00:07:27.622 11:20:13 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:07:27.622 11:20:13 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:27.622 11:20:13 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:27.622 11:20:13 event -- common/autotest_common.sh@10 -- # set +x 00:07:27.622 ************************************ 00:07:27.622 START TEST app_repeat 00:07:27.622 ************************************ 00:07:27.622 11:20:13 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:07:27.622 11:20:13 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:27.622 11:20:13 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:27.622 11:20:13 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:07:27.622 11:20:13 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:27.622 11:20:13 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:07:27.622 11:20:13 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:07:27.622 11:20:13 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:07:27.622 11:20:13 event.app_repeat -- event/event.sh@19 -- # repeat_pid=60241 00:07:27.622 11:20:13 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:07:27.622 Process app_repeat pid: 60241 00:07:27.622 11:20:13 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 60241' 00:07:27.622 11:20:13 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:07:27.622 11:20:13 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:27.622 spdk_app_start Round 0 00:07:27.622 11:20:13 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:07:27.622 11:20:13 event.app_repeat -- event/event.sh@25 -- # waitforlisten 60241 /var/tmp/spdk-nbd.sock 00:07:27.622 11:20:13 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 60241 ']' 00:07:27.622 11:20:13 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:27.622 11:20:13 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:27.622 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:27.622 11:20:13 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:27.622 11:20:13 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:27.622 11:20:13 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:27.622 [2024-11-20 11:20:13.139412] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 
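Editorial note: the event_scheduler suite that just finished starts the scheduler test app with RPC waiting enabled, switches it to the dynamic scheduler (falling back with the governor/POWER notices seen above when cpufreq is unavailable), then drives thread creation through a test-only RPC plugin. The RPC names and flags below are exactly the ones in the trace; the plugin location on PYTHONPATH and the default /var/tmp/spdk.sock socket are assumptions.

# Sketch of driving the scheduler test app the way scheduler.sh does above.
SPDK_DIR=/home/vagrant/spdk_repo/spdk
rpc() { "$SPDK_DIR/scripts/rpc.py" "$@"; }                   # defaults to /var/tmp/spdk.sock

"$SPDK_DIR/test/event/scheduler/scheduler" -m 0xF -p 0x2 --wait-for-rpc -f &
scheduler_pid=$!
until [ -S /var/tmp/spdk.sock ]; do sleep 0.2; done          # the harness uses waitforlisten here

rpc framework_set_scheduler dynamic                          # may fall back if the DPDK governor is unavailable
rpc framework_start_init

export PYTHONPATH=$SPDK_DIR/test/event/scheduler             # assumed location of scheduler_plugin
rpc --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
tid=$(rpc --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0)   # prints the new thread id (11 in the log)
rpc --plugin scheduler_plugin scheduler_thread_set_active "$tid" 50
tid2=$(rpc --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100)    # thread 12 in the log
rpc --plugin scheduler_plugin scheduler_thread_delete "$tid2"

kill "$scheduler_pid"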
00:07:27.622 [2024-11-20 11:20:13.139514] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60241 ] 00:07:27.939 [2024-11-20 11:20:13.279579] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:27.939 [2024-11-20 11:20:13.316706] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:27.939 [2024-11-20 11:20:13.316725] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:28.229 11:20:13 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:28.229 11:20:13 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:07:28.229 11:20:13 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:28.486 Malloc0 00:07:28.486 11:20:13 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:29.050 Malloc1 00:07:29.050 11:20:14 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:29.050 11:20:14 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:29.050 11:20:14 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:29.050 11:20:14 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:29.050 11:20:14 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:29.050 11:20:14 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:29.050 11:20:14 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:29.050 11:20:14 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:29.050 11:20:14 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:29.050 11:20:14 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:29.050 11:20:14 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:29.050 11:20:14 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:29.050 11:20:14 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:29.050 11:20:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:29.050 11:20:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:29.050 11:20:14 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:29.307 /dev/nbd0 00:07:29.564 11:20:14 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:29.564 11:20:14 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:29.564 11:20:14 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:07:29.564 11:20:14 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:07:29.564 11:20:14 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:29.564 11:20:14 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:29.564 11:20:14 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:07:29.564 11:20:14 event.app_repeat -- 
common/autotest_common.sh@877 -- # break 00:07:29.564 11:20:14 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:29.564 11:20:14 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:29.564 11:20:14 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:29.564 1+0 records in 00:07:29.564 1+0 records out 00:07:29.564 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000289718 s, 14.1 MB/s 00:07:29.564 11:20:14 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:29.564 11:20:14 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:07:29.564 11:20:14 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:29.564 11:20:14 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:29.564 11:20:14 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:07:29.564 11:20:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:29.564 11:20:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:29.564 11:20:14 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:29.821 /dev/nbd1 00:07:29.821 11:20:15 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:29.821 11:20:15 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:29.821 11:20:15 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:07:29.821 11:20:15 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:07:29.821 11:20:15 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:29.821 11:20:15 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:29.821 11:20:15 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:07:29.821 11:20:15 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:07:29.821 11:20:15 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:29.821 11:20:15 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:29.821 11:20:15 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:29.821 1+0 records in 00:07:29.821 1+0 records out 00:07:29.821 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000344313 s, 11.9 MB/s 00:07:29.821 11:20:15 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:29.821 11:20:15 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:07:29.821 11:20:15 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:29.821 11:20:15 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:29.821 11:20:15 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:07:29.821 11:20:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:29.821 11:20:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:29.821 11:20:15 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:29.821 11:20:15 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 
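Editorial note: the Malloc-over-NBD round that app_repeat is running here (create two malloc bdevs, expose them as /dev/nbd0 and /dev/nbd1, then push a random pattern through and compare, as traced just below) reduces to roughly the following outside the harness. Sizes, socket path and device names come from the trace; using build/bin/spdk_tgt as the RPC server is an assumption (the test uses its own app_repeat binary), and the host needs the nbd kernel module.

# Sketch of the Malloc-over-NBD data path that app_repeat exercises above.
SPDK_DIR=/home/vagrant/spdk_repo/spdk
SOCK=/var/tmp/spdk-nbd.sock
rpc() { "$SPDK_DIR/scripts/rpc.py" -s "$SOCK" "$@"; }

sudo modprobe nbd
"$SPDK_DIR/build/bin/spdk_tgt" -r "$SOCK" &                  # assumed binary; needs NBD support built in
until [ -S "$SOCK" ]; do sleep 0.2; done                     # the harness uses waitforlisten for this

m0=$(rpc bdev_malloc_create 64 4096)                         # 64 MiB bdev, 4096-byte blocks -> "Malloc0"
m1=$(rpc bdev_malloc_create 64 4096)                         # -> "Malloc1"
rpc nbd_start_disk "$m0" /dev/nbd0                           # expose each bdev as a kernel block device
rpc nbd_start_disk "$m1" /dev/nbd1

# Same write/compare round-trip nbd_common.sh performs on both devices in the trace.
dd if=/dev/urandom of=/tmp/nbdrandtest bs=4096 count=256
dd if=/tmp/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
cmp -b -n 1M /tmp/nbdrandtest /dev/nbd0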
00:07:29.821 11:20:15 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:30.385 11:20:15 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:30.385 { 00:07:30.385 "bdev_name": "Malloc0", 00:07:30.385 "nbd_device": "/dev/nbd0" 00:07:30.385 }, 00:07:30.385 { 00:07:30.385 "bdev_name": "Malloc1", 00:07:30.385 "nbd_device": "/dev/nbd1" 00:07:30.385 } 00:07:30.385 ]' 00:07:30.385 11:20:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:30.385 11:20:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:30.385 { 00:07:30.385 "bdev_name": "Malloc0", 00:07:30.385 "nbd_device": "/dev/nbd0" 00:07:30.385 }, 00:07:30.385 { 00:07:30.385 "bdev_name": "Malloc1", 00:07:30.385 "nbd_device": "/dev/nbd1" 00:07:30.385 } 00:07:30.385 ]' 00:07:30.385 11:20:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:30.385 /dev/nbd1' 00:07:30.385 11:20:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:30.385 /dev/nbd1' 00:07:30.385 11:20:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:30.385 11:20:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:30.385 11:20:15 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:30.385 11:20:15 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:30.385 11:20:15 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:30.385 11:20:15 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:30.385 11:20:15 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:30.385 11:20:15 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:30.385 11:20:15 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:30.385 11:20:15 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:30.385 11:20:15 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:30.385 11:20:15 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:30.385 256+0 records in 00:07:30.385 256+0 records out 00:07:30.385 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00757258 s, 138 MB/s 00:07:30.385 11:20:15 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:30.385 11:20:15 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:30.385 256+0 records in 00:07:30.385 256+0 records out 00:07:30.385 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0323683 s, 32.4 MB/s 00:07:30.385 11:20:15 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:30.385 11:20:15 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:30.385 256+0 records in 00:07:30.385 256+0 records out 00:07:30.385 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0411801 s, 25.5 MB/s 00:07:30.385 11:20:15 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:30.385 11:20:15 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:30.385 11:20:15 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:30.385 11:20:15 
event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:30.385 11:20:15 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:30.385 11:20:15 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:30.385 11:20:15 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:30.385 11:20:15 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:30.385 11:20:15 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:07:30.385 11:20:15 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:30.385 11:20:15 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:07:30.385 11:20:15 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:30.385 11:20:15 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:30.385 11:20:15 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:30.385 11:20:15 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:30.385 11:20:15 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:30.385 11:20:15 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:30.385 11:20:15 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:30.385 11:20:15 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:30.951 11:20:16 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:30.951 11:20:16 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:30.951 11:20:16 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:30.951 11:20:16 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:30.951 11:20:16 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:30.951 11:20:16 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:30.951 11:20:16 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:30.951 11:20:16 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:30.951 11:20:16 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:30.951 11:20:16 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:31.209 11:20:16 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:31.209 11:20:16 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:31.209 11:20:16 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:31.209 11:20:16 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:31.210 11:20:16 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:31.210 11:20:16 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:31.210 11:20:16 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:31.210 11:20:16 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:31.210 11:20:16 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:31.210 11:20:16 event.app_repeat -- 
bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:31.210 11:20:16 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:31.775 11:20:17 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:31.775 11:20:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:31.775 11:20:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:31.775 11:20:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:31.775 11:20:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:31.775 11:20:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:31.775 11:20:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:31.775 11:20:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:31.775 11:20:17 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:31.775 11:20:17 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:31.775 11:20:17 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:31.775 11:20:17 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:31.775 11:20:17 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:32.034 11:20:17 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:32.034 [2024-11-20 11:20:17.536576] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:32.034 [2024-11-20 11:20:17.573034] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:32.034 [2024-11-20 11:20:17.573046] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:32.034 [2024-11-20 11:20:17.605297] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:32.034 [2024-11-20 11:20:17.605579] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:35.313 11:20:20 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:35.313 spdk_app_start Round 1 00:07:35.313 11:20:20 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:07:35.313 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:35.313 11:20:20 event.app_repeat -- event/event.sh@25 -- # waitforlisten 60241 /var/tmp/spdk-nbd.sock 00:07:35.313 11:20:20 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 60241 ']' 00:07:35.313 11:20:20 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:35.313 11:20:20 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:35.313 11:20:20 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
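Editorial note: the teardown between repeat rounds, traced above just before spdk_app_start Round 1, detaches both NBD devices, checks that the target reports none left, and asks the app to exit. It reduces to the short sequence below, reusing the rpc helper and socket assumptions from the previous sketch.

# Mirror of the traced teardown steps.
rpc nbd_stop_disk /dev/nbd0
rpc nbd_stop_disk /dev/nbd1
[ "$(rpc nbd_get_disks | grep -c /dev/nbd)" -eq 0 ]          # the log's nbd_get_count check, expecting 0
rpc spdk_kill_instance SIGTERM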
00:07:35.313 11:20:20 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:35.313 11:20:20 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:35.313 11:20:20 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:35.313 11:20:20 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:07:35.313 11:20:20 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:35.573 Malloc0 00:07:35.573 11:20:21 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:35.831 Malloc1 00:07:35.831 11:20:21 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:35.831 11:20:21 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:35.831 11:20:21 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:35.831 11:20:21 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:35.831 11:20:21 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:35.831 11:20:21 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:35.831 11:20:21 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:35.831 11:20:21 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:35.831 11:20:21 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:35.831 11:20:21 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:35.831 11:20:21 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:35.831 11:20:21 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:35.831 11:20:21 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:35.831 11:20:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:35.831 11:20:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:35.831 11:20:21 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:36.398 /dev/nbd0 00:07:36.398 11:20:21 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:36.398 11:20:21 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:36.398 11:20:21 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:07:36.398 11:20:21 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:07:36.398 11:20:21 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:36.398 11:20:21 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:36.398 11:20:21 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:07:36.398 11:20:21 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:07:36.398 11:20:21 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:36.398 11:20:21 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:36.398 11:20:21 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:36.398 1+0 records in 00:07:36.398 1+0 records out 
00:07:36.398 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000339217 s, 12.1 MB/s 00:07:36.398 11:20:21 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:36.398 11:20:21 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:07:36.398 11:20:21 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:36.398 11:20:21 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:36.398 11:20:21 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:07:36.398 11:20:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:36.398 11:20:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:36.398 11:20:21 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:36.656 /dev/nbd1 00:07:36.657 11:20:22 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:36.657 11:20:22 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:36.657 11:20:22 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:07:36.657 11:20:22 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:07:36.657 11:20:22 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:36.657 11:20:22 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:36.657 11:20:22 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:07:36.657 11:20:22 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:07:36.657 11:20:22 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:36.657 11:20:22 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:36.657 11:20:22 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:36.657 1+0 records in 00:07:36.657 1+0 records out 00:07:36.657 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000811015 s, 5.1 MB/s 00:07:36.657 11:20:22 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:36.657 11:20:22 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:07:36.657 11:20:22 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:36.657 11:20:22 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:36.657 11:20:22 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:07:36.657 11:20:22 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:36.657 11:20:22 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:36.657 11:20:22 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:36.657 11:20:22 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:36.657 11:20:22 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:37.223 11:20:22 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:37.223 { 00:07:37.223 "bdev_name": "Malloc0", 00:07:37.223 "nbd_device": "/dev/nbd0" 00:07:37.223 }, 00:07:37.223 { 00:07:37.223 "bdev_name": "Malloc1", 00:07:37.223 "nbd_device": "/dev/nbd1" 00:07:37.223 } 
00:07:37.223 ]' 00:07:37.223 11:20:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:37.223 { 00:07:37.223 "bdev_name": "Malloc0", 00:07:37.223 "nbd_device": "/dev/nbd0" 00:07:37.223 }, 00:07:37.223 { 00:07:37.223 "bdev_name": "Malloc1", 00:07:37.223 "nbd_device": "/dev/nbd1" 00:07:37.223 } 00:07:37.223 ]' 00:07:37.223 11:20:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:37.223 11:20:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:37.223 /dev/nbd1' 00:07:37.223 11:20:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:37.223 /dev/nbd1' 00:07:37.223 11:20:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:37.223 11:20:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:37.223 11:20:22 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:37.223 11:20:22 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:37.223 11:20:22 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:37.223 11:20:22 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:37.223 11:20:22 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:37.223 11:20:22 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:37.223 11:20:22 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:37.223 11:20:22 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:37.223 11:20:22 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:37.223 11:20:22 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:37.223 256+0 records in 00:07:37.223 256+0 records out 00:07:37.223 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00497951 s, 211 MB/s 00:07:37.223 11:20:22 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:37.223 11:20:22 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:37.223 256+0 records in 00:07:37.223 256+0 records out 00:07:37.223 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0265385 s, 39.5 MB/s 00:07:37.223 11:20:22 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:37.223 11:20:22 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:37.223 256+0 records in 00:07:37.223 256+0 records out 00:07:37.223 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0293795 s, 35.7 MB/s 00:07:37.223 11:20:22 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:37.223 11:20:22 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:37.223 11:20:22 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:37.223 11:20:22 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:37.223 11:20:22 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:37.223 11:20:22 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:37.223 11:20:22 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:37.223 11:20:22 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:37.223 11:20:22 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:07:37.223 11:20:22 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:37.223 11:20:22 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:07:37.223 11:20:22 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:37.224 11:20:22 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:37.224 11:20:22 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:37.224 11:20:22 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:37.224 11:20:22 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:37.224 11:20:22 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:37.224 11:20:22 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:37.224 11:20:22 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:37.790 11:20:23 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:37.790 11:20:23 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:37.790 11:20:23 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:37.790 11:20:23 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:37.790 11:20:23 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:37.790 11:20:23 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:37.790 11:20:23 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:37.790 11:20:23 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:37.790 11:20:23 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:37.790 11:20:23 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:38.048 11:20:23 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:38.048 11:20:23 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:38.048 11:20:23 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:38.048 11:20:23 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:38.048 11:20:23 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:38.048 11:20:23 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:38.048 11:20:23 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:38.048 11:20:23 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:38.048 11:20:23 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:38.048 11:20:23 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:38.048 11:20:23 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:38.305 11:20:23 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:38.305 11:20:23 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:38.305 11:20:23 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:07:38.305 11:20:23 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:38.305 11:20:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:38.305 11:20:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:38.305 11:20:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:38.305 11:20:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:38.305 11:20:23 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:38.305 11:20:23 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:38.305 11:20:23 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:38.305 11:20:23 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:38.305 11:20:23 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:38.560 11:20:24 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:38.819 [2024-11-20 11:20:24.175814] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:38.819 [2024-11-20 11:20:24.211002] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:38.819 [2024-11-20 11:20:24.211014] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:38.819 [2024-11-20 11:20:24.244336] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:38.819 [2024-11-20 11:20:24.244399] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:42.103 spdk_app_start Round 2 00:07:42.103 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:42.103 11:20:27 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:42.103 11:20:27 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:07:42.103 11:20:27 event.app_repeat -- event/event.sh@25 -- # waitforlisten 60241 /var/tmp/spdk-nbd.sock 00:07:42.103 11:20:27 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 60241 ']' 00:07:42.103 11:20:27 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:42.103 11:20:27 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:42.103 11:20:27 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
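Stripped of the xtrace noise, each app_repeat round traced above runs the same sequence. A condensed sketch of that flow follows; the rpc.py path, socket, bdev names, and file paths are taken from the log, while the loop structure is simplified for illustration:

  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
  tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest

  # create two 64 MB malloc bdevs (4096-byte block size) and export them over nbd
  $rpc bdev_malloc_create 64 4096        # -> Malloc0
  $rpc bdev_malloc_create 64 4096        # -> Malloc1
  $rpc nbd_start_disk Malloc0 /dev/nbd0
  $rpc nbd_start_disk Malloc1 /dev/nbd1

  # write 1 MiB of random data through both exports, then verify it reads back intact
  dd if=/dev/urandom of="$tmp_file" bs=4096 count=256
  for dev in /dev/nbd0 /dev/nbd1; do
      dd if="$tmp_file" of="$dev" bs=4096 count=256 oflag=direct
  done
  for dev in /dev/nbd0 /dev/nbd1; do
      cmp -b -n 1M "$tmp_file" "$dev"
  done
  rm "$tmp_file"

  # tear down the exports and ask the app to exit so the next round can begin
  $rpc nbd_stop_disk /dev/nbd0
  $rpc nbd_stop_disk /dev/nbd1
  $rpc spdk_kill_instance SIGTERM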
00:07:42.103 11:20:27 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:42.103 11:20:27 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:42.103 11:20:27 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:42.103 11:20:27 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:07:42.103 11:20:27 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:42.361 Malloc0 00:07:42.361 11:20:27 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:42.619 Malloc1 00:07:42.619 11:20:28 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:42.619 11:20:28 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:42.619 11:20:28 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:42.619 11:20:28 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:42.619 11:20:28 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:42.619 11:20:28 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:42.619 11:20:28 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:42.619 11:20:28 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:42.619 11:20:28 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:42.619 11:20:28 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:42.619 11:20:28 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:42.619 11:20:28 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:42.619 11:20:28 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:42.619 11:20:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:42.619 11:20:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:42.619 11:20:28 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:42.877 /dev/nbd0 00:07:42.877 11:20:28 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:42.877 11:20:28 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:42.877 11:20:28 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:07:42.877 11:20:28 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:07:42.877 11:20:28 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:42.877 11:20:28 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:42.877 11:20:28 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:07:42.877 11:20:28 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:07:42.877 11:20:28 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:42.877 11:20:28 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:42.877 11:20:28 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:42.877 1+0 records in 00:07:42.877 1+0 records out 
00:07:42.877 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000289279 s, 14.2 MB/s 00:07:42.877 11:20:28 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:42.877 11:20:28 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:07:42.877 11:20:28 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:42.877 11:20:28 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:42.877 11:20:28 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:07:42.877 11:20:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:42.877 11:20:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:42.877 11:20:28 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:43.443 /dev/nbd1 00:07:43.443 11:20:28 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:43.443 11:20:28 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:43.443 11:20:28 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:07:43.443 11:20:28 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:07:43.443 11:20:28 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:43.443 11:20:28 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:43.443 11:20:28 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:07:43.443 11:20:28 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:07:43.443 11:20:28 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:43.443 11:20:28 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:43.443 11:20:28 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:43.443 1+0 records in 00:07:43.443 1+0 records out 00:07:43.443 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000511875 s, 8.0 MB/s 00:07:43.443 11:20:28 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:43.443 11:20:28 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:07:43.443 11:20:28 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:43.443 11:20:28 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:43.443 11:20:28 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:07:43.443 11:20:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:43.443 11:20:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:43.443 11:20:28 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:43.443 11:20:28 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:43.443 11:20:28 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:43.704 11:20:29 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:43.704 { 00:07:43.704 "bdev_name": "Malloc0", 00:07:43.704 "nbd_device": "/dev/nbd0" 00:07:43.704 }, 00:07:43.704 { 00:07:43.704 "bdev_name": "Malloc1", 00:07:43.704 "nbd_device": "/dev/nbd1" 00:07:43.704 } 
00:07:43.704 ]' 00:07:43.704 11:20:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:43.704 { 00:07:43.704 "bdev_name": "Malloc0", 00:07:43.704 "nbd_device": "/dev/nbd0" 00:07:43.704 }, 00:07:43.704 { 00:07:43.704 "bdev_name": "Malloc1", 00:07:43.704 "nbd_device": "/dev/nbd1" 00:07:43.704 } 00:07:43.704 ]' 00:07:43.704 11:20:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:43.704 11:20:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:43.704 /dev/nbd1' 00:07:43.704 11:20:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:43.704 /dev/nbd1' 00:07:43.704 11:20:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:43.704 11:20:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:43.704 11:20:29 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:43.704 11:20:29 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:43.704 11:20:29 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:43.704 11:20:29 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:43.704 11:20:29 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:43.704 11:20:29 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:43.704 11:20:29 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:43.704 11:20:29 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:43.704 11:20:29 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:43.704 11:20:29 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:43.704 256+0 records in 00:07:43.704 256+0 records out 00:07:43.704 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00697676 s, 150 MB/s 00:07:43.704 11:20:29 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:43.704 11:20:29 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:43.963 256+0 records in 00:07:43.963 256+0 records out 00:07:43.963 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0397318 s, 26.4 MB/s 00:07:43.963 11:20:29 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:43.963 11:20:29 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:43.963 256+0 records in 00:07:43.963 256+0 records out 00:07:43.963 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.055768 s, 18.8 MB/s 00:07:43.963 11:20:29 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:43.963 11:20:29 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:43.963 11:20:29 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:43.963 11:20:29 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:43.963 11:20:29 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:43.963 11:20:29 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:43.963 11:20:29 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:43.963 11:20:29 event.app_repeat -- bdev/nbd_common.sh@82 
-- # for i in "${nbd_list[@]}" 00:07:43.963 11:20:29 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:07:43.963 11:20:29 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:43.963 11:20:29 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:07:43.963 11:20:29 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:43.963 11:20:29 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:43.963 11:20:29 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:43.963 11:20:29 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:43.963 11:20:29 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:43.963 11:20:29 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:43.963 11:20:29 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:43.963 11:20:29 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:44.221 11:20:29 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:44.221 11:20:29 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:44.221 11:20:29 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:44.221 11:20:29 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:44.221 11:20:29 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:44.221 11:20:29 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:44.221 11:20:29 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:44.221 11:20:29 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:44.221 11:20:29 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:44.221 11:20:29 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:44.786 11:20:30 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:44.786 11:20:30 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:44.787 11:20:30 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:44.787 11:20:30 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:44.787 11:20:30 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:44.787 11:20:30 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:44.787 11:20:30 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:44.787 11:20:30 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:44.787 11:20:30 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:44.787 11:20:30 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:44.787 11:20:30 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:45.044 11:20:30 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:45.045 11:20:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:45.045 11:20:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | 
.nbd_device' 00:07:45.045 11:20:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:45.045 11:20:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:45.045 11:20:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:45.045 11:20:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:45.045 11:20:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:45.045 11:20:30 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:45.045 11:20:30 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:45.045 11:20:30 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:45.045 11:20:30 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:45.045 11:20:30 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:45.611 11:20:30 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:45.611 [2024-11-20 11:20:31.004215] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:45.611 [2024-11-20 11:20:31.038760] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:45.611 [2024-11-20 11:20:31.038769] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:45.611 [2024-11-20 11:20:31.069071] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:45.611 [2024-11-20 11:20:31.069142] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:48.892 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:48.892 11:20:33 event.app_repeat -- event/event.sh@38 -- # waitforlisten 60241 /var/tmp/spdk-nbd.sock 00:07:48.892 11:20:33 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 60241 ']' 00:07:48.892 11:20:33 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:48.892 11:20:33 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:48.892 11:20:33 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
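The empty-count check that closes each round is plain JSON parsing of nbd_get_disks output. Roughly, assuming the same rpc.py path and socket as above (the '|| true' mirrors the lone 'true' seen in the trace when grep matches nothing):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/spdk-nbd.sock

  nbd_disks_json=$("$rpc" -s "$sock" nbd_get_disks)
  nbd_disks_name=$(echo "$nbd_disks_json" | jq -r '.[] | .nbd_device')
  # grep -c exits non-zero when nothing matches, so '|| true' keeps a clean 0 count
  count=$(echo "$nbd_disks_name" | grep -c /dev/nbd || true)
  if [ "$count" -ne 0 ]; then
      echo "nbd devices still exported after nbd_stop_disk" >&2
      exit 1
  fi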
00:07:48.892 11:20:33 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:48.892 11:20:33 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:48.892 11:20:34 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:48.892 11:20:34 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:07:48.892 11:20:34 event.app_repeat -- event/event.sh@39 -- # killprocess 60241 00:07:48.892 11:20:34 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 60241 ']' 00:07:48.892 11:20:34 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 60241 00:07:48.892 11:20:34 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:07:48.892 11:20:34 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:48.892 11:20:34 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60241 00:07:48.892 killing process with pid 60241 00:07:48.892 11:20:34 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:48.892 11:20:34 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:48.893 11:20:34 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60241' 00:07:48.893 11:20:34 event.app_repeat -- common/autotest_common.sh@973 -- # kill 60241 00:07:48.893 11:20:34 event.app_repeat -- common/autotest_common.sh@978 -- # wait 60241 00:07:48.893 spdk_app_start is called in Round 0. 00:07:48.893 Shutdown signal received, stop current app iteration 00:07:48.893 Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 reinitialization... 00:07:48.893 spdk_app_start is called in Round 1. 00:07:48.893 Shutdown signal received, stop current app iteration 00:07:48.893 Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 reinitialization... 00:07:48.893 spdk_app_start is called in Round 2. 00:07:48.893 Shutdown signal received, stop current app iteration 00:07:48.893 Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 reinitialization... 00:07:48.893 spdk_app_start is called in Round 3. 00:07:48.893 Shutdown signal received, stop current app iteration 00:07:48.893 ************************************ 00:07:48.893 END TEST app_repeat 00:07:48.893 11:20:34 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:07:48.893 11:20:34 event.app_repeat -- event/event.sh@42 -- # return 0 00:07:48.893 00:07:48.893 real 0m21.310s 00:07:48.893 user 0m49.794s 00:07:48.893 sys 0m3.372s 00:07:48.893 11:20:34 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:48.893 11:20:34 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:48.893 ************************************ 00:07:48.893 11:20:34 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:07:48.893 11:20:34 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:07:48.893 11:20:34 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:48.893 11:20:34 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:48.893 11:20:34 event -- common/autotest_common.sh@10 -- # set +x 00:07:48.893 ************************************ 00:07:48.893 START TEST cpu_locks 00:07:48.893 ************************************ 00:07:48.893 11:20:34 event.cpu_locks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:07:49.151 * Looking for test storage... 
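killprocess, which appears after every test in this log, performs the same small guard each time. A condensed sketch of the checks visible in the trace; the real helper in autotest_common.sh covers additional corner cases:

  killprocess() {
      local pid=$1
      [ -n "$pid" ] || return 1              # a pid must be supplied
      kill -0 "$pid" || return 1             # and the process must still exist
      if [ "$(uname)" = Linux ]; then
          # SPDK reactors show up as reactor_N; this sketch only issues a plain kill
          # when the target is not a sudo wrapper
          local process_name
          process_name=$(ps --no-headers -o comm= "$pid")
          if [ "$process_name" = sudo ]; then
              return 1
          fi
      fi
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid" || true
  }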
00:07:49.151 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:07:49.151 11:20:34 event.cpu_locks -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:49.151 11:20:34 event.cpu_locks -- common/autotest_common.sh@1693 -- # lcov --version 00:07:49.151 11:20:34 event.cpu_locks -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:49.151 11:20:34 event.cpu_locks -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:49.151 11:20:34 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:49.151 11:20:34 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:49.151 11:20:34 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:49.151 11:20:34 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:07:49.151 11:20:34 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:07:49.151 11:20:34 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:07:49.151 11:20:34 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:07:49.151 11:20:34 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:07:49.151 11:20:34 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:07:49.151 11:20:34 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:07:49.151 11:20:34 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:49.151 11:20:34 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:07:49.151 11:20:34 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:07:49.151 11:20:34 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:49.151 11:20:34 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:49.151 11:20:34 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:07:49.151 11:20:34 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:07:49.151 11:20:34 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:49.151 11:20:34 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:07:49.151 11:20:34 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:07:49.151 11:20:34 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:07:49.151 11:20:34 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:07:49.151 11:20:34 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:49.151 11:20:34 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:07:49.151 11:20:34 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:07:49.151 11:20:34 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:49.151 11:20:34 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:49.151 11:20:34 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:07:49.151 11:20:34 event.cpu_locks -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:49.151 11:20:34 event.cpu_locks -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:49.151 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:49.151 --rc genhtml_branch_coverage=1 00:07:49.151 --rc genhtml_function_coverage=1 00:07:49.151 --rc genhtml_legend=1 00:07:49.151 --rc geninfo_all_blocks=1 00:07:49.151 --rc geninfo_unexecuted_blocks=1 00:07:49.151 00:07:49.151 ' 00:07:49.151 11:20:34 event.cpu_locks -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:49.151 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:49.151 --rc genhtml_branch_coverage=1 00:07:49.151 --rc genhtml_function_coverage=1 
00:07:49.151 --rc genhtml_legend=1 00:07:49.151 --rc geninfo_all_blocks=1 00:07:49.151 --rc geninfo_unexecuted_blocks=1 00:07:49.151 00:07:49.151 ' 00:07:49.151 11:20:34 event.cpu_locks -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:49.151 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:49.151 --rc genhtml_branch_coverage=1 00:07:49.151 --rc genhtml_function_coverage=1 00:07:49.151 --rc genhtml_legend=1 00:07:49.151 --rc geninfo_all_blocks=1 00:07:49.151 --rc geninfo_unexecuted_blocks=1 00:07:49.151 00:07:49.151 ' 00:07:49.151 11:20:34 event.cpu_locks -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:49.151 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:49.151 --rc genhtml_branch_coverage=1 00:07:49.151 --rc genhtml_function_coverage=1 00:07:49.151 --rc genhtml_legend=1 00:07:49.151 --rc geninfo_all_blocks=1 00:07:49.151 --rc geninfo_unexecuted_blocks=1 00:07:49.151 00:07:49.151 ' 00:07:49.151 11:20:34 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:07:49.151 11:20:34 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:07:49.151 11:20:34 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:07:49.151 11:20:34 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:07:49.151 11:20:34 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:49.151 11:20:34 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:49.151 11:20:34 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:49.151 ************************************ 00:07:49.151 START TEST default_locks 00:07:49.151 ************************************ 00:07:49.151 11:20:34 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:07:49.151 11:20:34 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=60894 00:07:49.151 11:20:34 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:49.151 11:20:34 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 60894 00:07:49.151 11:20:34 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 60894 ']' 00:07:49.151 11:20:34 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:49.151 11:20:34 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:49.151 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:49.151 11:20:34 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:49.151 11:20:34 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:49.151 11:20:34 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:49.410 [2024-11-20 11:20:34.756184] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 
00:07:49.410 [2024-11-20 11:20:34.756342] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60894 ] 00:07:49.410 [2024-11-20 11:20:34.906889] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:49.410 [2024-11-20 11:20:34.955633] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:49.667 11:20:35 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:49.667 11:20:35 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:07:49.667 11:20:35 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 60894 00:07:49.667 11:20:35 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 60894 00:07:49.667 11:20:35 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:50.233 11:20:35 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 60894 00:07:50.233 11:20:35 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 60894 ']' 00:07:50.233 11:20:35 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 60894 00:07:50.233 11:20:35 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:07:50.233 11:20:35 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:50.233 11:20:35 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60894 00:07:50.233 11:20:35 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:50.233 killing process with pid 60894 00:07:50.233 11:20:35 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:50.233 11:20:35 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60894' 00:07:50.233 11:20:35 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 60894 00:07:50.233 11:20:35 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 60894 00:07:50.491 11:20:35 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 60894 00:07:50.491 11:20:35 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:07:50.491 11:20:35 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 60894 00:07:50.491 11:20:35 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:07:50.491 11:20:35 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:50.491 11:20:35 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:07:50.491 11:20:35 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:50.491 11:20:35 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 60894 00:07:50.491 11:20:35 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 60894 ']' 00:07:50.491 11:20:35 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:50.491 11:20:35 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:50.491 Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:50.491 11:20:35 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:50.491 11:20:35 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:50.491 11:20:35 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:50.491 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (60894) - No such process 00:07:50.491 ERROR: process (pid: 60894) is no longer running 00:07:50.491 11:20:35 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:50.491 11:20:35 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:07:50.491 11:20:35 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:07:50.491 11:20:35 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:50.491 11:20:35 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:50.491 11:20:35 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:50.491 11:20:35 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:07:50.491 11:20:35 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:50.491 11:20:35 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:07:50.491 11:20:35 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:50.491 00:07:50.491 real 0m1.295s 00:07:50.491 user 0m1.394s 00:07:50.491 sys 0m0.521s 00:07:50.491 11:20:35 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:50.492 11:20:35 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:50.492 ************************************ 00:07:50.492 END TEST default_locks 00:07:50.492 ************************************ 00:07:50.492 11:20:36 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:07:50.492 11:20:36 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:50.492 11:20:36 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:50.492 11:20:36 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:50.492 ************************************ 00:07:50.492 START TEST default_locks_via_rpc 00:07:50.492 ************************************ 00:07:50.492 11:20:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:07:50.492 11:20:36 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=60945 00:07:50.492 11:20:36 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:50.492 11:20:36 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 60945 00:07:50.492 11:20:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 60945 ']' 00:07:50.492 11:20:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:50.492 11:20:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:50.492 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
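default_locks reduces to one probe, run while the target is alive and relied on again after the kill: the reactor must hold a file lock whose path contains spdk_cpu_lock. A minimal sketch of that probe as traced above:

  locks_exist() {
      local pid=$1
      # spdk_tgt -m 0x1 takes a POSIX lock on a per-core spdk_cpu_lock file;
      # lslocks lists locks by holder, so this succeeds only while the target runs
      lslocks -p "$pid" | grep -q spdk_cpu_lock
  }

  locks_exist "$spdk_tgt_pid"        # expected to succeed before killprocess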
00:07:50.492 11:20:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:50.492 11:20:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:50.492 11:20:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:50.751 [2024-11-20 11:20:36.103254] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 00:07:50.751 [2024-11-20 11:20:36.103399] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60945 ] 00:07:50.751 [2024-11-20 11:20:36.253219] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:50.751 [2024-11-20 11:20:36.291239] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:51.010 11:20:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:51.010 11:20:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:51.010 11:20:36 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:07:51.010 11:20:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.010 11:20:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:51.010 11:20:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.010 11:20:36 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:07:51.010 11:20:36 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:51.010 11:20:36 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:07:51.010 11:20:36 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:51.010 11:20:36 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:07:51.010 11:20:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.010 11:20:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:51.010 11:20:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.010 11:20:36 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 60945 00:07:51.010 11:20:36 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 60945 00:07:51.010 11:20:36 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:51.269 11:20:36 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 60945 00:07:51.528 11:20:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 60945 ']' 00:07:51.528 11:20:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 60945 00:07:51.528 11:20:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:07:51.528 11:20:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:51.528 11:20:36 event.cpu_locks.default_locks_via_rpc -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60945 00:07:51.528 11:20:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:51.528 11:20:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:51.528 killing process with pid 60945 00:07:51.528 11:20:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60945' 00:07:51.528 11:20:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 60945 00:07:51.528 11:20:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 60945 00:07:51.787 00:07:51.787 real 0m1.137s 00:07:51.787 user 0m1.189s 00:07:51.787 sys 0m0.498s 00:07:51.787 11:20:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:51.787 11:20:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:51.787 ************************************ 00:07:51.787 END TEST default_locks_via_rpc 00:07:51.787 ************************************ 00:07:51.787 11:20:37 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:07:51.787 11:20:37 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:51.787 11:20:37 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:51.787 11:20:37 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:51.787 ************************************ 00:07:51.787 START TEST non_locking_app_on_locked_coremask 00:07:51.787 ************************************ 00:07:51.787 11:20:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:07:51.787 11:20:37 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=60995 00:07:51.787 11:20:37 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:51.787 11:20:37 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 60995 /var/tmp/spdk.sock 00:07:51.787 11:20:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60995 ']' 00:07:51.787 11:20:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:51.787 11:20:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:51.787 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:51.787 11:20:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:51.787 11:20:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:51.787 11:20:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:51.787 [2024-11-20 11:20:37.286840] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 
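The default_locks_via_rpc run above drives the same check through the JSON-RPC interface: the core locks are dropped and re-taken at runtime. Roughly as traced (the script's no_locks helper inspects the lock files themselves; the negative check is simplified here to the same lslocks probe):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  # drop the per-core lock at runtime and expect no spdk_cpu_lock to be held ...
  "$rpc" framework_disable_cpumask_locks
  if lslocks -p "$spdk_tgt_pid" | grep -q spdk_cpu_lock; then
      exit 1
  fi

  # ... then re-acquire it and expect the lock to be visible again before killprocess
  "$rpc" framework_enable_cpumask_locks
  lslocks -p "$spdk_tgt_pid" | grep -q spdk_cpu_lock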
00:07:51.787 [2024-11-20 11:20:37.287025] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60995 ] 00:07:52.045 [2024-11-20 11:20:37.437122] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:52.045 [2024-11-20 11:20:37.487637] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:52.304 11:20:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:52.304 11:20:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:52.304 11:20:37 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=61008 00:07:52.304 11:20:37 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 61008 /var/tmp/spdk2.sock 00:07:52.304 11:20:37 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:07:52.304 11:20:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 61008 ']' 00:07:52.304 11:20:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:52.304 11:20:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:52.304 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:52.304 11:20:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:52.304 11:20:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:52.304 11:20:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:52.304 [2024-11-20 11:20:37.773099] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 00:07:52.304 [2024-11-20 11:20:37.773234] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61008 ] 00:07:52.562 [2024-11-20 11:20:37.946792] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:52.562 [2024-11-20 11:20:37.946881] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:52.562 [2024-11-20 11:20:38.027715] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:53.521 11:20:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:53.521 11:20:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:53.521 11:20:38 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 60995 00:07:53.521 11:20:38 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60995 00:07:53.521 11:20:38 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:54.470 11:20:39 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 60995 00:07:54.470 11:20:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 60995 ']' 00:07:54.470 11:20:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 60995 00:07:54.470 11:20:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:07:54.470 11:20:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:54.470 11:20:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60995 00:07:54.470 11:20:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:54.470 11:20:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:54.470 killing process with pid 60995 00:07:54.470 11:20:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60995' 00:07:54.470 11:20:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 60995 00:07:54.470 11:20:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 60995 00:07:55.037 11:20:40 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 61008 00:07:55.037 11:20:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 61008 ']' 00:07:55.037 11:20:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 61008 00:07:55.037 11:20:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:07:55.037 11:20:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:55.037 11:20:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61008 00:07:55.037 11:20:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:55.037 killing process with pid 61008 00:07:55.037 11:20:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:55.037 11:20:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61008' 00:07:55.037 11:20:40 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 61008 00:07:55.037 11:20:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 61008 00:07:55.037 ************************************ 00:07:55.037 END TEST non_locking_app_on_locked_coremask 00:07:55.037 ************************************ 00:07:55.037 00:07:55.037 real 0m3.399s 00:07:55.037 user 0m4.086s 00:07:55.037 sys 0m0.980s 00:07:55.037 11:20:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:55.037 11:20:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:55.296 11:20:40 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:07:55.296 11:20:40 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:55.296 11:20:40 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:55.296 11:20:40 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:55.296 ************************************ 00:07:55.296 START TEST locking_app_on_unlocked_coremask 00:07:55.296 ************************************ 00:07:55.296 11:20:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:07:55.296 11:20:40 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=61083 00:07:55.296 11:20:40 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:07:55.296 11:20:40 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 61083 /var/tmp/spdk.sock 00:07:55.296 11:20:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 61083 ']' 00:07:55.296 11:20:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:55.296 11:20:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:55.296 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:55.296 11:20:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:55.296 11:20:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:55.296 11:20:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:55.296 [2024-11-20 11:20:40.722577] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 00:07:55.296 [2024-11-20 11:20:40.722757] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61083 ] 00:07:55.296 [2024-11-20 11:20:40.875680] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:55.296 [2024-11-20 11:20:40.875756] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:55.555 [2024-11-20 11:20:40.913169] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:55.555 11:20:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:55.555 11:20:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:55.555 11:20:41 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=61103 00:07:55.555 11:20:41 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:55.555 11:20:41 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 61103 /var/tmp/spdk2.sock 00:07:55.555 11:20:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 61103 ']' 00:07:55.555 11:20:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:55.555 11:20:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:55.555 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:55.555 11:20:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:55.555 11:20:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:55.555 11:20:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:55.813 [2024-11-20 11:20:41.166572] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 
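Both targets in the locking_app_on_unlocked_coremask run come up on core 0 only because the first one opts out of lock enforcement; the essential pair of invocations, trimmed from the command lines traced above (paths shortened, assumed to run from the SPDK repo root):

    # first target gives up per-core lock enforcement, leaving core 0 claimable
    build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks &
    # second target on the same core claims the core-0 lock and starts normally
    build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock &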
00:07:55.813 [2024-11-20 11:20:41.166678] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61103 ] 00:07:55.813 [2024-11-20 11:20:41.337831] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:56.072 [2024-11-20 11:20:41.437507] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:57.007 11:20:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:57.007 11:20:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:57.007 11:20:42 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 61103 00:07:57.007 11:20:42 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 61103 00:07:57.007 11:20:42 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:57.941 11:20:43 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 61083 00:07:57.941 11:20:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 61083 ']' 00:07:57.941 11:20:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 61083 00:07:57.941 11:20:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:07:57.941 11:20:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:57.941 11:20:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61083 00:07:57.941 11:20:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:57.941 killing process with pid 61083 00:07:57.941 11:20:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:57.941 11:20:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61083' 00:07:57.941 11:20:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 61083 00:07:57.941 11:20:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 61083 00:07:58.200 11:20:43 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 61103 00:07:58.200 11:20:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 61103 ']' 00:07:58.200 11:20:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 61103 00:07:58.200 11:20:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:07:58.200 11:20:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:58.200 11:20:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61103 00:07:58.200 11:20:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:58.200 11:20:43 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:58.200 11:20:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61103' 00:07:58.200 killing process with pid 61103 00:07:58.200 11:20:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 61103 00:07:58.200 11:20:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 61103 00:07:58.768 00:07:58.768 real 0m3.427s 00:07:58.768 user 0m4.112s 00:07:58.768 sys 0m0.940s 00:07:58.768 11:20:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:58.768 11:20:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:58.768 ************************************ 00:07:58.768 END TEST locking_app_on_unlocked_coremask 00:07:58.768 ************************************ 00:07:58.768 11:20:44 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:07:58.768 11:20:44 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:58.768 11:20:44 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:58.768 11:20:44 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:58.768 ************************************ 00:07:58.768 START TEST locking_app_on_locked_coremask 00:07:58.768 ************************************ 00:07:58.768 11:20:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:07:58.768 11:20:44 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=61182 00:07:58.768 11:20:44 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:58.768 11:20:44 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 61182 /var/tmp/spdk.sock 00:07:58.768 11:20:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 61182 ']' 00:07:58.768 11:20:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:58.768 11:20:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:58.768 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:58.768 11:20:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:58.768 11:20:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:58.768 11:20:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:58.768 [2024-11-20 11:20:44.205721] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 
00:07:58.768 [2024-11-20 11:20:44.205864] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61182 ] 00:07:59.027 [2024-11-20 11:20:44.359603] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:59.027 [2024-11-20 11:20:44.393899] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:59.027 11:20:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:59.027 11:20:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:59.027 11:20:44 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=61191 00:07:59.027 11:20:44 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 61191 /var/tmp/spdk2.sock 00:07:59.027 11:20:44 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:59.027 11:20:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:07:59.027 11:20:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 61191 /var/tmp/spdk2.sock 00:07:59.027 11:20:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:07:59.027 11:20:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:59.027 11:20:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:07:59.027 11:20:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:59.027 11:20:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 61191 /var/tmp/spdk2.sock 00:07:59.027 11:20:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 61191 ']' 00:07:59.027 11:20:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:59.027 11:20:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:59.027 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:59.027 11:20:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:59.027 11:20:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:59.027 11:20:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:59.287 [2024-11-20 11:20:44.662350] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 
00:07:59.287 [2024-11-20 11:20:44.662487] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61191 ] 00:07:59.287 [2024-11-20 11:20:44.840974] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 61182 has claimed it. 00:07:59.287 [2024-11-20 11:20:44.841083] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:08:00.223 ERROR: process (pid: 61191) is no longer running 00:08:00.223 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (61191) - No such process 00:08:00.223 11:20:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:00.223 11:20:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:08:00.223 11:20:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:08:00.223 11:20:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:00.223 11:20:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:00.223 11:20:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:00.223 11:20:45 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 61182 00:08:00.223 11:20:45 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 61182 00:08:00.223 11:20:45 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:08:00.482 11:20:46 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 61182 00:08:00.482 11:20:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 61182 ']' 00:08:00.482 11:20:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 61182 00:08:00.482 11:20:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:08:00.482 11:20:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:00.482 11:20:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61182 00:08:00.482 killing process with pid 61182 00:08:00.482 11:20:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:00.482 11:20:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:00.482 11:20:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61182' 00:08:00.482 11:20:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 61182 00:08:00.482 11:20:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 61182 00:08:01.049 00:08:01.049 real 0m2.249s 00:08:01.049 user 0m2.766s 00:08:01.049 sys 0m0.618s 00:08:01.049 11:20:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:01.049 11:20:46 event.cpu_locks.locking_app_on_locked_coremask 
-- common/autotest_common.sh@10 -- # set +x 00:08:01.049 ************************************ 00:08:01.049 END TEST locking_app_on_locked_coremask 00:08:01.049 ************************************ 00:08:01.049 11:20:46 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:08:01.049 11:20:46 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:01.049 11:20:46 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:01.049 11:20:46 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:01.049 ************************************ 00:08:01.049 START TEST locking_overlapped_coremask 00:08:01.049 ************************************ 00:08:01.049 11:20:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:08:01.049 11:20:46 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=61248 00:08:01.049 11:20:46 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 61248 /var/tmp/spdk.sock 00:08:01.049 11:20:46 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:08:01.049 11:20:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 61248 ']' 00:08:01.049 11:20:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:01.049 11:20:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:01.049 11:20:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:01.049 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:01.049 11:20:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:01.049 11:20:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:01.049 [2024-11-20 11:20:46.486652] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 
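In the locking_app_on_locked_coremask run that just ended, the second target (pid 61191) exits because core 0 is already claimed by pid 61182. The claim is an exclusive lock on a per-core file under /var/tmp (the spdk_cpu_lock_000 ... names appear in the check_remaining_locks output later in this log); whether spdk_tgt takes it with flock(2) or some other advisory lock is not visible here, but under that assumption the failure mode can be mimicked with flock(1):

    touch /var/tmp/spdk_cpu_lock_000
    # first holder: exclusive, non-blocking lock on the core-0 file, held for 30s
    flock -xn /var/tmp/spdk_cpu_lock_000 -c 'sleep 30' &
    sleep 1
    # a second non-blocking attempt fails at once, mirroring the
    # "Cannot create lock on core 0" exit of pid 61191 above
    flock -xn /var/tmp/spdk_cpu_lock_000 -c true || echo 'core 0 already claimed'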
00:08:01.049 [2024-11-20 11:20:46.486785] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61248 ] 00:08:01.308 [2024-11-20 11:20:46.648815] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:01.308 [2024-11-20 11:20:46.688676] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:01.308 [2024-11-20 11:20:46.688758] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:01.308 [2024-11-20 11:20:46.688768] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:01.308 11:20:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:01.308 11:20:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:08:01.308 11:20:46 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=61259 00:08:01.308 11:20:46 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:08:01.308 11:20:46 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 61259 /var/tmp/spdk2.sock 00:08:01.308 11:20:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:08:01.308 11:20:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 61259 /var/tmp/spdk2.sock 00:08:01.308 11:20:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:08:01.566 11:20:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:01.566 11:20:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:08:01.566 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:01.566 11:20:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:01.566 11:20:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 61259 /var/tmp/spdk2.sock 00:08:01.566 11:20:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 61259 ']' 00:08:01.566 11:20:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:01.566 11:20:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:01.566 11:20:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:01.566 11:20:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:01.566 11:20:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:01.566 [2024-11-20 11:20:46.981556] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 
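The failure that follows is plain mask arithmetic: the first target holds -m 0x7 (binary 111, cores 0-2) and the second was just started with -m 0x1c (binary 11100, cores 2-4), so the masks intersect on core 2 and the second claim is rejected. A one-line check of the overlap:

    # 0x7 & 0x1c = 0x4 -> only bit 2 is set, so core 2 is the contested core
    printf 'overlap mask: 0x%x\n' $(( 0x7 & 0x1c ))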
00:08:01.566 [2024-11-20 11:20:46.996982] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61259 ] 00:08:01.824 [2024-11-20 11:20:47.190561] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 61248 has claimed it. 00:08:01.824 [2024-11-20 11:20:47.190688] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:08:02.389 ERROR: process (pid: 61259) is no longer running 00:08:02.389 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (61259) - No such process 00:08:02.389 11:20:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:02.389 11:20:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:08:02.389 11:20:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:08:02.389 11:20:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:02.389 11:20:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:02.389 11:20:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:02.389 11:20:47 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:08:02.389 11:20:47 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:08:02.389 11:20:47 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:08:02.389 11:20:47 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:08:02.389 11:20:47 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 61248 00:08:02.389 11:20:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 61248 ']' 00:08:02.389 11:20:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 61248 00:08:02.390 11:20:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:08:02.390 11:20:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:02.390 11:20:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61248 00:08:02.390 11:20:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:02.390 killing process with pid 61248 00:08:02.390 11:20:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:02.390 11:20:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61248' 00:08:02.390 11:20:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 61248 00:08:02.390 11:20:47 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 61248 00:08:02.648 00:08:02.648 real 0m1.715s 00:08:02.648 user 0m4.827s 00:08:02.648 sys 0m0.370s 00:08:02.648 11:20:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:02.648 11:20:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:02.648 ************************************ 00:08:02.648 END TEST locking_overlapped_coremask 00:08:02.648 ************************************ 00:08:02.648 11:20:48 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:08:02.648 11:20:48 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:02.648 11:20:48 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:02.648 11:20:48 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:02.648 ************************************ 00:08:02.648 START TEST locking_overlapped_coremask_via_rpc 00:08:02.648 ************************************ 00:08:02.648 11:20:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:08:02.648 11:20:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=61311 00:08:02.648 11:20:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:08:02.649 11:20:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 61311 /var/tmp/spdk.sock 00:08:02.649 11:20:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 61311 ']' 00:08:02.649 11:20:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:02.649 11:20:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:02.649 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:02.649 11:20:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:02.649 11:20:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:02.649 11:20:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:02.907 [2024-11-20 11:20:48.243503] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 00:08:02.907 [2024-11-20 11:20:48.243677] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61311 ] 00:08:02.907 [2024-11-20 11:20:48.393558] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:08:02.907 [2024-11-20 11:20:48.393642] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:02.907 [2024-11-20 11:20:48.442162] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:02.907 [2024-11-20 11:20:48.442331] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:02.907 [2024-11-20 11:20:48.442343] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:03.165 11:20:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:03.165 11:20:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:08:03.165 11:20:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=61327 00:08:03.165 11:20:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 61327 /var/tmp/spdk2.sock 00:08:03.165 11:20:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:08:03.165 11:20:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 61327 ']' 00:08:03.165 11:20:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:03.165 11:20:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:03.165 11:20:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:03.165 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:03.165 11:20:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:03.165 11:20:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:03.165 [2024-11-20 11:20:48.706510] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 00:08:03.165 [2024-11-20 11:20:48.706652] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61327 ] 00:08:03.424 [2024-11-20 11:20:48.889415] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:08:03.424 [2024-11-20 11:20:48.889515] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:03.424 [2024-11-20 11:20:48.975520] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:03.424 [2024-11-20 11:20:48.982680] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:08:03.424 [2024-11-20 11:20:48.982688] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:03.998 11:20:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:03.998 11:20:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:08:03.998 11:20:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:08:03.998 11:20:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:03.998 11:20:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:03.998 11:20:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:03.998 11:20:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:08:03.998 11:20:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:08:03.998 11:20:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:08:03.998 11:20:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:08:03.998 11:20:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:03.998 11:20:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:08:03.998 11:20:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:03.998 11:20:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:08:03.998 11:20:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:03.998 11:20:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:03.998 [2024-11-20 11:20:49.349919] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 61311 has claimed it. 
00:08:03.998 2024/11/20 11:20:49 error on JSON-RPC call, method: framework_enable_cpumask_locks, params: map[], err: error received for framework_enable_cpumask_locks method, err: Code=-32603 Msg=Failed to claim CPU core: 2 00:08:03.998 request: 00:08:03.998 { 00:08:03.998 "method": "framework_enable_cpumask_locks", 00:08:03.998 "params": {} 00:08:03.998 } 00:08:03.998 Got JSON-RPC error response 00:08:03.998 GoRPCClient: error on JSON-RPC call 00:08:03.998 11:20:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:08:03.998 11:20:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:08:03.998 11:20:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:03.998 11:20:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:03.998 11:20:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:03.998 11:20:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 61311 /var/tmp/spdk.sock 00:08:03.998 11:20:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 61311 ']' 00:08:03.998 11:20:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:03.998 11:20:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:03.998 11:20:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:03.998 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:03.998 11:20:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:03.998 11:20:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:04.256 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:04.256 11:20:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:04.256 11:20:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:08:04.256 11:20:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 61327 /var/tmp/spdk2.sock 00:08:04.256 11:20:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 61327 ']' 00:08:04.256 11:20:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:04.256 11:20:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:04.256 11:20:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
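The rejected framework_enable_cpumask_locks call above is issued against the second target's socket after the first target claimed cores 0-2. The same negative result (JSON-RPC error -32603, "Failed to claim CPU core: 2") can be reproduced by hand with SPDK's Python rpc.py client rather than the Go client the harness uses here, assuming the two sockets from this run:

    # succeeds: the -m 0x7 target claims cores 0-2
    ./scripts/rpc.py -s /var/tmp/spdk.sock framework_enable_cpumask_locks
    # fails: the -m 0x1c target cannot claim core 2; the client exits non-zero
    # with the -32603 error shown in the log above
    ./scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks || echo 'expected failure'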
00:08:04.256 11:20:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:04.256 11:20:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:04.825 ************************************ 00:08:04.825 END TEST locking_overlapped_coremask_via_rpc 00:08:04.825 ************************************ 00:08:04.825 11:20:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:04.825 11:20:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:08:04.825 11:20:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:08:04.825 11:20:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:08:04.825 11:20:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:08:04.825 11:20:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:08:04.825 00:08:04.825 real 0m2.075s 00:08:04.825 user 0m1.489s 00:08:04.825 sys 0m0.156s 00:08:04.825 11:20:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:04.825 11:20:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:04.825 11:20:50 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:08:04.825 11:20:50 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 61311 ]] 00:08:04.825 11:20:50 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 61311 00:08:04.825 11:20:50 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 61311 ']' 00:08:04.825 11:20:50 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 61311 00:08:04.825 11:20:50 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:08:04.825 11:20:50 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:04.825 11:20:50 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61311 00:08:04.825 killing process with pid 61311 00:08:04.825 11:20:50 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:04.825 11:20:50 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:04.825 11:20:50 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61311' 00:08:04.825 11:20:50 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 61311 00:08:04.825 11:20:50 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 61311 00:08:05.084 11:20:50 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 61327 ]] 00:08:05.084 11:20:50 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 61327 00:08:05.084 11:20:50 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 61327 ']' 00:08:05.084 11:20:50 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 61327 00:08:05.084 11:20:50 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:08:05.084 11:20:50 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:05.084 
11:20:50 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61327 00:08:05.084 killing process with pid 61327 00:08:05.084 11:20:50 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:08:05.084 11:20:50 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:08:05.084 11:20:50 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61327' 00:08:05.084 11:20:50 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 61327 00:08:05.084 11:20:50 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 61327 00:08:05.342 11:20:50 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:08:05.342 11:20:50 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:08:05.342 11:20:50 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 61311 ]] 00:08:05.342 11:20:50 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 61311 00:08:05.342 11:20:50 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 61311 ']' 00:08:05.342 11:20:50 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 61311 00:08:05.342 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (61311) - No such process 00:08:05.342 Process with pid 61311 is not found 00:08:05.342 11:20:50 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 61311 is not found' 00:08:05.342 11:20:50 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 61327 ]] 00:08:05.342 11:20:50 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 61327 00:08:05.342 11:20:50 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 61327 ']' 00:08:05.342 11:20:50 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 61327 00:08:05.342 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (61327) - No such process 00:08:05.342 Process with pid 61327 is not found 00:08:05.342 11:20:50 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 61327 is not found' 00:08:05.342 11:20:50 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:08:05.342 00:08:05.342 real 0m16.421s 00:08:05.342 user 0m30.046s 00:08:05.342 sys 0m4.778s 00:08:05.342 11:20:50 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:05.342 11:20:50 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:05.342 ************************************ 00:08:05.342 END TEST cpu_locks 00:08:05.342 ************************************ 00:08:05.342 00:08:05.342 real 0m45.989s 00:08:05.342 user 1m32.976s 00:08:05.342 sys 0m8.814s 00:08:05.342 11:20:50 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:05.342 11:20:50 event -- common/autotest_common.sh@10 -- # set +x 00:08:05.342 ************************************ 00:08:05.342 END TEST event 00:08:05.342 ************************************ 00:08:05.601 11:20:50 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:08:05.601 11:20:50 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:05.601 11:20:50 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:05.601 11:20:50 -- common/autotest_common.sh@10 -- # set +x 00:08:05.601 ************************************ 00:08:05.601 START TEST thread 00:08:05.601 ************************************ 00:08:05.601 11:20:50 thread -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:08:05.601 * Looking for test storage... 
00:08:05.601 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:08:05.601 11:20:51 thread -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:05.601 11:20:51 thread -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:05.601 11:20:51 thread -- common/autotest_common.sh@1693 -- # lcov --version 00:08:05.601 11:20:51 thread -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:05.601 11:20:51 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:05.601 11:20:51 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:05.601 11:20:51 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:05.601 11:20:51 thread -- scripts/common.sh@336 -- # IFS=.-: 00:08:05.601 11:20:51 thread -- scripts/common.sh@336 -- # read -ra ver1 00:08:05.601 11:20:51 thread -- scripts/common.sh@337 -- # IFS=.-: 00:08:05.601 11:20:51 thread -- scripts/common.sh@337 -- # read -ra ver2 00:08:05.601 11:20:51 thread -- scripts/common.sh@338 -- # local 'op=<' 00:08:05.601 11:20:51 thread -- scripts/common.sh@340 -- # ver1_l=2 00:08:05.601 11:20:51 thread -- scripts/common.sh@341 -- # ver2_l=1 00:08:05.601 11:20:51 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:05.601 11:20:51 thread -- scripts/common.sh@344 -- # case "$op" in 00:08:05.601 11:20:51 thread -- scripts/common.sh@345 -- # : 1 00:08:05.601 11:20:51 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:05.601 11:20:51 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:05.601 11:20:51 thread -- scripts/common.sh@365 -- # decimal 1 00:08:05.601 11:20:51 thread -- scripts/common.sh@353 -- # local d=1 00:08:05.601 11:20:51 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:05.601 11:20:51 thread -- scripts/common.sh@355 -- # echo 1 00:08:05.601 11:20:51 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:08:05.601 11:20:51 thread -- scripts/common.sh@366 -- # decimal 2 00:08:05.601 11:20:51 thread -- scripts/common.sh@353 -- # local d=2 00:08:05.601 11:20:51 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:05.601 11:20:51 thread -- scripts/common.sh@355 -- # echo 2 00:08:05.601 11:20:51 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:08:05.601 11:20:51 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:05.601 11:20:51 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:05.601 11:20:51 thread -- scripts/common.sh@368 -- # return 0 00:08:05.601 11:20:51 thread -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:05.601 11:20:51 thread -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:05.601 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:05.601 --rc genhtml_branch_coverage=1 00:08:05.601 --rc genhtml_function_coverage=1 00:08:05.601 --rc genhtml_legend=1 00:08:05.601 --rc geninfo_all_blocks=1 00:08:05.601 --rc geninfo_unexecuted_blocks=1 00:08:05.601 00:08:05.601 ' 00:08:05.601 11:20:51 thread -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:05.601 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:05.601 --rc genhtml_branch_coverage=1 00:08:05.601 --rc genhtml_function_coverage=1 00:08:05.601 --rc genhtml_legend=1 00:08:05.601 --rc geninfo_all_blocks=1 00:08:05.601 --rc geninfo_unexecuted_blocks=1 00:08:05.601 00:08:05.601 ' 00:08:05.601 11:20:51 thread -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:05.601 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:08:05.601 --rc genhtml_branch_coverage=1 00:08:05.601 --rc genhtml_function_coverage=1 00:08:05.601 --rc genhtml_legend=1 00:08:05.601 --rc geninfo_all_blocks=1 00:08:05.601 --rc geninfo_unexecuted_blocks=1 00:08:05.601 00:08:05.601 ' 00:08:05.601 11:20:51 thread -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:05.601 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:05.601 --rc genhtml_branch_coverage=1 00:08:05.601 --rc genhtml_function_coverage=1 00:08:05.601 --rc genhtml_legend=1 00:08:05.601 --rc geninfo_all_blocks=1 00:08:05.601 --rc geninfo_unexecuted_blocks=1 00:08:05.601 00:08:05.601 ' 00:08:05.601 11:20:51 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:08:05.601 11:20:51 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:08:05.601 11:20:51 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:05.601 11:20:51 thread -- common/autotest_common.sh@10 -- # set +x 00:08:05.601 ************************************ 00:08:05.601 START TEST thread_poller_perf 00:08:05.601 ************************************ 00:08:05.601 11:20:51 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:08:05.601 [2024-11-20 11:20:51.143802] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 00:08:05.601 [2024-11-20 11:20:51.143926] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61468 ] 00:08:05.859 [2024-11-20 11:20:51.293181] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:05.859 [2024-11-20 11:20:51.343066] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:05.859 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:08:06.915 [2024-11-20T11:20:52.504Z] ====================================== 00:08:06.915 [2024-11-20T11:20:52.504Z] busy:2212371541 (cyc) 00:08:06.915 [2024-11-20T11:20:52.504Z] total_run_count: 272000 00:08:06.915 [2024-11-20T11:20:52.504Z] tsc_hz: 2200000000 (cyc) 00:08:06.915 [2024-11-20T11:20:52.504Z] ====================================== 00:08:06.915 [2024-11-20T11:20:52.504Z] poller_cost: 8133 (cyc), 3696 (nsec) 00:08:06.915 00:08:06.915 real 0m1.273s 00:08:06.915 user 0m1.116s 00:08:06.915 sys 0m0.045s 00:08:06.915 11:20:52 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:06.915 11:20:52 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:08:06.915 ************************************ 00:08:06.915 END TEST thread_poller_perf 00:08:06.915 ************************************ 00:08:06.915 11:20:52 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:08:06.915 11:20:52 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:08:06.915 11:20:52 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:06.915 11:20:52 thread -- common/autotest_common.sh@10 -- # set +x 00:08:06.915 ************************************ 00:08:06.915 START TEST thread_poller_perf 00:08:06.915 ************************************ 00:08:06.915 11:20:52 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:08:06.915 [2024-11-20 11:20:52.462196] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 00:08:06.915 [2024-11-20 11:20:52.462351] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61509 ] 00:08:07.174 [2024-11-20 11:20:52.612469] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:07.174 [2024-11-20 11:20:52.662935] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:07.174 Running 1000 pollers for 1 seconds with 0 microseconds period. 
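For reference, the poller_cost line in the first results block above is consistent with plain integer division of the busy TSC count by total_run_count, with the nanosecond figure derived from tsc_hz (assuming that is all poller_perf does with these numbers; the values below reproduce the printed 8133 cyc / 3696 nsec, and the same arithmetic gives 637 cyc / 289 nsec for the zero-period run that follows):

  # Recompute poller_cost for the -l 1 run from the values printed above.
  busy=2212371541                          # busy TSC cycles over the 1 s run
  runs=272000                              # total_run_count
  tsc_hz=2200000000                        # TSC ticks per second
  cyc=$(( busy / runs ))                   # 8133 cycles per poll
  nsec=$(( cyc * 1000000000 / tsc_hz ))    # 3696 ns per poll
  echo "poller_cost: ${cyc} (cyc), ${nsec} (nsec)"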
00:08:08.549 [2024-11-20T11:20:54.138Z] ====================================== 00:08:08.549 [2024-11-20T11:20:54.138Z] busy:2202818027 (cyc) 00:08:08.549 [2024-11-20T11:20:54.138Z] total_run_count: 3457000 00:08:08.549 [2024-11-20T11:20:54.138Z] tsc_hz: 2200000000 (cyc) 00:08:08.549 [2024-11-20T11:20:54.138Z] ====================================== 00:08:08.549 [2024-11-20T11:20:54.138Z] poller_cost: 637 (cyc), 289 (nsec) 00:08:08.549 00:08:08.549 real 0m1.292s 00:08:08.549 user 0m1.137s 00:08:08.549 sys 0m0.041s 00:08:08.549 11:20:53 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:08.549 11:20:53 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:08:08.549 ************************************ 00:08:08.549 END TEST thread_poller_perf 00:08:08.549 ************************************ 00:08:08.549 11:20:53 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:08:08.549 00:08:08.549 real 0m2.813s 00:08:08.549 user 0m2.389s 00:08:08.549 sys 0m0.200s 00:08:08.549 ************************************ 00:08:08.549 END TEST thread 00:08:08.549 ************************************ 00:08:08.549 11:20:53 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:08.549 11:20:53 thread -- common/autotest_common.sh@10 -- # set +x 00:08:08.549 11:20:53 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:08:08.549 11:20:53 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:08:08.549 11:20:53 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:08.549 11:20:53 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:08.549 11:20:53 -- common/autotest_common.sh@10 -- # set +x 00:08:08.549 ************************************ 00:08:08.549 START TEST app_cmdline 00:08:08.549 ************************************ 00:08:08.549 11:20:53 app_cmdline -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:08:08.549 * Looking for test storage... 
00:08:08.549 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:08:08.549 11:20:53 app_cmdline -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:08.549 11:20:53 app_cmdline -- common/autotest_common.sh@1693 -- # lcov --version 00:08:08.549 11:20:53 app_cmdline -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:08.549 11:20:54 app_cmdline -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:08.549 11:20:54 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:08.549 11:20:54 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:08.549 11:20:54 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:08.549 11:20:54 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:08:08.549 11:20:54 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:08:08.549 11:20:54 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:08:08.549 11:20:54 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:08:08.549 11:20:54 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:08:08.549 11:20:54 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:08:08.549 11:20:54 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:08:08.549 11:20:54 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:08.549 11:20:54 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:08:08.549 11:20:54 app_cmdline -- scripts/common.sh@345 -- # : 1 00:08:08.549 11:20:54 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:08.549 11:20:54 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:08.549 11:20:54 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:08:08.549 11:20:54 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:08:08.549 11:20:54 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:08.549 11:20:54 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:08:08.549 11:20:54 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:08:08.549 11:20:54 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:08:08.549 11:20:54 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:08:08.549 11:20:54 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:08.549 11:20:54 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:08:08.549 11:20:54 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:08:08.549 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:08.550 11:20:54 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:08.550 11:20:54 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:08.550 11:20:54 app_cmdline -- scripts/common.sh@368 -- # return 0 00:08:08.550 11:20:54 app_cmdline -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:08.550 11:20:54 app_cmdline -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:08.550 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:08.550 --rc genhtml_branch_coverage=1 00:08:08.550 --rc genhtml_function_coverage=1 00:08:08.550 --rc genhtml_legend=1 00:08:08.550 --rc geninfo_all_blocks=1 00:08:08.550 --rc geninfo_unexecuted_blocks=1 00:08:08.550 00:08:08.550 ' 00:08:08.550 11:20:54 app_cmdline -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:08.550 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:08.550 --rc genhtml_branch_coverage=1 00:08:08.550 --rc genhtml_function_coverage=1 00:08:08.550 --rc genhtml_legend=1 00:08:08.550 --rc geninfo_all_blocks=1 00:08:08.550 --rc geninfo_unexecuted_blocks=1 00:08:08.550 00:08:08.550 ' 00:08:08.550 11:20:54 app_cmdline -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:08.550 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:08.550 --rc genhtml_branch_coverage=1 00:08:08.550 --rc genhtml_function_coverage=1 00:08:08.550 --rc genhtml_legend=1 00:08:08.550 --rc geninfo_all_blocks=1 00:08:08.550 --rc geninfo_unexecuted_blocks=1 00:08:08.550 00:08:08.550 ' 00:08:08.550 11:20:54 app_cmdline -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:08.550 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:08.550 --rc genhtml_branch_coverage=1 00:08:08.550 --rc genhtml_function_coverage=1 00:08:08.550 --rc genhtml_legend=1 00:08:08.550 --rc geninfo_all_blocks=1 00:08:08.550 --rc geninfo_unexecuted_blocks=1 00:08:08.550 00:08:08.550 ' 00:08:08.550 11:20:54 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:08:08.550 11:20:54 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=61586 00:08:08.550 11:20:54 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 61586 00:08:08.550 11:20:54 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:08:08.550 11:20:54 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 61586 ']' 00:08:08.550 11:20:54 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:08.550 11:20:54 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:08.550 11:20:54 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:08.550 11:20:54 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:08.550 11:20:54 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:08:08.550 [2024-11-20 11:20:54.117552] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 
00:08:08.550 [2024-11-20 11:20:54.118348] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61586 ] 00:08:08.808 [2024-11-20 11:20:54.270666] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:08.808 [2024-11-20 11:20:54.319706] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:09.067 11:20:54 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:09.067 11:20:54 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:08:09.067 11:20:54 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:08:09.325 { 00:08:09.325 "fields": { 00:08:09.325 "commit": "557f022f6", 00:08:09.325 "major": 25, 00:08:09.325 "minor": 1, 00:08:09.325 "patch": 0, 00:08:09.325 "suffix": "-pre" 00:08:09.325 }, 00:08:09.325 "version": "SPDK v25.01-pre git sha1 557f022f6" 00:08:09.325 } 00:08:09.584 11:20:54 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:08:09.584 11:20:54 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:08:09.584 11:20:54 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:08:09.584 11:20:54 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:08:09.584 11:20:54 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:08:09.584 11:20:54 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:08:09.584 11:20:54 app_cmdline -- app/cmdline.sh@26 -- # sort 00:08:09.584 11:20:54 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.584 11:20:54 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:08:09.584 11:20:54 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.584 11:20:54 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:08:09.584 11:20:54 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:08:09.584 11:20:54 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:09.584 11:20:54 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:08:09.584 11:20:54 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:09.584 11:20:54 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:09.584 11:20:54 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:09.584 11:20:54 app_cmdline -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:09.584 11:20:54 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:09.584 11:20:54 app_cmdline -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:09.584 11:20:54 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:09.584 11:20:54 app_cmdline -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:09.584 11:20:54 app_cmdline -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:08:09.584 11:20:54 app_cmdline -- common/autotest_common.sh@655 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:09.843 2024/11/20 11:20:55 error on JSON-RPC call, method: env_dpdk_get_mem_stats, params: map[], err: error received for env_dpdk_get_mem_stats method, err: Code=-32601 Msg=Method not found 00:08:09.843 request: 00:08:09.843 { 00:08:09.843 "method": "env_dpdk_get_mem_stats", 00:08:09.843 "params": {} 00:08:09.843 } 00:08:09.843 Got JSON-RPC error response 00:08:09.843 GoRPCClient: error on JSON-RPC call 00:08:09.843 11:20:55 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:08:09.843 11:20:55 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:09.843 11:20:55 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:09.843 11:20:55 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:09.843 11:20:55 app_cmdline -- app/cmdline.sh@1 -- # killprocess 61586 00:08:09.843 11:20:55 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 61586 ']' 00:08:09.843 11:20:55 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 61586 00:08:09.843 11:20:55 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:08:09.843 11:20:55 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:09.843 11:20:55 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61586 00:08:09.843 11:20:55 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:09.844 11:20:55 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:09.844 killing process with pid 61586 00:08:09.844 11:20:55 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61586' 00:08:09.844 11:20:55 app_cmdline -- common/autotest_common.sh@973 -- # kill 61586 00:08:09.844 11:20:55 app_cmdline -- common/autotest_common.sh@978 -- # wait 61586 00:08:10.103 00:08:10.103 real 0m1.838s 00:08:10.103 user 0m2.556s 00:08:10.103 sys 0m0.450s 00:08:10.103 11:20:55 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:10.103 11:20:55 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:08:10.103 ************************************ 00:08:10.103 END TEST app_cmdline 00:08:10.103 ************************************ 00:08:10.361 11:20:55 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:08:10.361 11:20:55 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:10.361 11:20:55 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:10.361 11:20:55 -- common/autotest_common.sh@10 -- # set +x 00:08:10.361 ************************************ 00:08:10.361 START TEST version 00:08:10.361 ************************************ 00:08:10.361 11:20:55 version -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:08:10.361 * Looking for test storage... 
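The app_cmdline run above starts spdk_tgt with --rpcs-allowed spdk_get_version,rpc_get_methods, so only those two RPCs are reachable; the env_dpdk_get_mem_stats call is therefore expected to fail with -32601 (Method not found), which is exactly what the Go and Python clients report before the target is killed. Roughly the same check can be replayed by hand in this workspace (a manual sketch, not the test script: the sleep is a crude stand-in for the test's waitforlisten helper, and spdk_tgt needs root plus configured hugepages, as on the CI VM):

  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods &
  tgt=$!
  sleep 2                                                             # wait for /var/tmp/spdk.sock
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version        # allowed: prints the JSON above
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py rpc_get_methods         # allowed: lists the two methods
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats \
      || echo "rejected as expected (Code=-32601 Method not found)"
  kill "$tgt"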
00:08:10.361 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:08:10.361 11:20:55 version -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:10.361 11:20:55 version -- common/autotest_common.sh@1693 -- # lcov --version 00:08:10.361 11:20:55 version -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:10.361 11:20:55 version -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:10.361 11:20:55 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:10.361 11:20:55 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:10.361 11:20:55 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:10.361 11:20:55 version -- scripts/common.sh@336 -- # IFS=.-: 00:08:10.361 11:20:55 version -- scripts/common.sh@336 -- # read -ra ver1 00:08:10.361 11:20:55 version -- scripts/common.sh@337 -- # IFS=.-: 00:08:10.361 11:20:55 version -- scripts/common.sh@337 -- # read -ra ver2 00:08:10.361 11:20:55 version -- scripts/common.sh@338 -- # local 'op=<' 00:08:10.361 11:20:55 version -- scripts/common.sh@340 -- # ver1_l=2 00:08:10.361 11:20:55 version -- scripts/common.sh@341 -- # ver2_l=1 00:08:10.361 11:20:55 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:10.361 11:20:55 version -- scripts/common.sh@344 -- # case "$op" in 00:08:10.361 11:20:55 version -- scripts/common.sh@345 -- # : 1 00:08:10.361 11:20:55 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:10.362 11:20:55 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:10.362 11:20:55 version -- scripts/common.sh@365 -- # decimal 1 00:08:10.362 11:20:55 version -- scripts/common.sh@353 -- # local d=1 00:08:10.362 11:20:55 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:10.362 11:20:55 version -- scripts/common.sh@355 -- # echo 1 00:08:10.362 11:20:55 version -- scripts/common.sh@365 -- # ver1[v]=1 00:08:10.362 11:20:55 version -- scripts/common.sh@366 -- # decimal 2 00:08:10.362 11:20:55 version -- scripts/common.sh@353 -- # local d=2 00:08:10.362 11:20:55 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:10.362 11:20:55 version -- scripts/common.sh@355 -- # echo 2 00:08:10.362 11:20:55 version -- scripts/common.sh@366 -- # ver2[v]=2 00:08:10.362 11:20:55 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:10.362 11:20:55 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:10.362 11:20:55 version -- scripts/common.sh@368 -- # return 0 00:08:10.362 11:20:55 version -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:10.362 11:20:55 version -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:10.362 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:10.362 --rc genhtml_branch_coverage=1 00:08:10.362 --rc genhtml_function_coverage=1 00:08:10.362 --rc genhtml_legend=1 00:08:10.362 --rc geninfo_all_blocks=1 00:08:10.362 --rc geninfo_unexecuted_blocks=1 00:08:10.362 00:08:10.362 ' 00:08:10.362 11:20:55 version -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:10.362 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:10.362 --rc genhtml_branch_coverage=1 00:08:10.362 --rc genhtml_function_coverage=1 00:08:10.362 --rc genhtml_legend=1 00:08:10.362 --rc geninfo_all_blocks=1 00:08:10.362 --rc geninfo_unexecuted_blocks=1 00:08:10.362 00:08:10.362 ' 00:08:10.362 11:20:55 version -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:10.362 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:08:10.362 --rc genhtml_branch_coverage=1 00:08:10.362 --rc genhtml_function_coverage=1 00:08:10.362 --rc genhtml_legend=1 00:08:10.362 --rc geninfo_all_blocks=1 00:08:10.362 --rc geninfo_unexecuted_blocks=1 00:08:10.362 00:08:10.362 ' 00:08:10.362 11:20:55 version -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:10.362 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:10.362 --rc genhtml_branch_coverage=1 00:08:10.362 --rc genhtml_function_coverage=1 00:08:10.362 --rc genhtml_legend=1 00:08:10.362 --rc geninfo_all_blocks=1 00:08:10.362 --rc geninfo_unexecuted_blocks=1 00:08:10.362 00:08:10.362 ' 00:08:10.362 11:20:55 version -- app/version.sh@17 -- # get_header_version major 00:08:10.362 11:20:55 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:10.362 11:20:55 version -- app/version.sh@14 -- # cut -f2 00:08:10.362 11:20:55 version -- app/version.sh@14 -- # tr -d '"' 00:08:10.362 11:20:55 version -- app/version.sh@17 -- # major=25 00:08:10.362 11:20:55 version -- app/version.sh@18 -- # get_header_version minor 00:08:10.362 11:20:55 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:10.362 11:20:55 version -- app/version.sh@14 -- # cut -f2 00:08:10.362 11:20:55 version -- app/version.sh@14 -- # tr -d '"' 00:08:10.362 11:20:55 version -- app/version.sh@18 -- # minor=1 00:08:10.362 11:20:55 version -- app/version.sh@19 -- # get_header_version patch 00:08:10.362 11:20:55 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:10.362 11:20:55 version -- app/version.sh@14 -- # cut -f2 00:08:10.362 11:20:55 version -- app/version.sh@14 -- # tr -d '"' 00:08:10.362 11:20:55 version -- app/version.sh@19 -- # patch=0 00:08:10.362 11:20:55 version -- app/version.sh@20 -- # get_header_version suffix 00:08:10.362 11:20:55 version -- app/version.sh@14 -- # cut -f2 00:08:10.362 11:20:55 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:10.362 11:20:55 version -- app/version.sh@14 -- # tr -d '"' 00:08:10.362 11:20:55 version -- app/version.sh@20 -- # suffix=-pre 00:08:10.362 11:20:55 version -- app/version.sh@22 -- # version=25.1 00:08:10.362 11:20:55 version -- app/version.sh@25 -- # (( patch != 0 )) 00:08:10.362 11:20:55 version -- app/version.sh@28 -- # version=25.1rc0 00:08:10.362 11:20:55 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:08:10.362 11:20:55 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:08:10.622 11:20:55 version -- app/version.sh@30 -- # py_version=25.1rc0 00:08:10.622 11:20:55 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:08:10.622 ************************************ 00:08:10.622 END TEST version 00:08:10.622 ************************************ 00:08:10.622 00:08:10.622 real 0m0.274s 00:08:10.622 user 0m0.204s 00:08:10.622 sys 0m0.099s 00:08:10.622 11:20:55 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:10.622 11:20:55 version -- common/autotest_common.sh@10 -- # set +x 00:08:10.622 11:20:56 -- 
spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:08:10.622 11:20:56 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:08:10.622 11:20:56 -- spdk/autotest.sh@194 -- # uname -s 00:08:10.622 11:20:56 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:08:10.622 11:20:56 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:08:10.622 11:20:56 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:08:10.622 11:20:56 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:08:10.622 11:20:56 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:08:10.622 11:20:56 -- spdk/autotest.sh@260 -- # timing_exit lib 00:08:10.622 11:20:56 -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:10.622 11:20:56 -- common/autotest_common.sh@10 -- # set +x 00:08:10.622 11:20:56 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:08:10.622 11:20:56 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:08:10.622 11:20:56 -- spdk/autotest.sh@276 -- # '[' 1 -eq 1 ']' 00:08:10.622 11:20:56 -- spdk/autotest.sh@277 -- # export NET_TYPE 00:08:10.622 11:20:56 -- spdk/autotest.sh@280 -- # '[' tcp = rdma ']' 00:08:10.622 11:20:56 -- spdk/autotest.sh@283 -- # '[' tcp = tcp ']' 00:08:10.622 11:20:56 -- spdk/autotest.sh@284 -- # run_test nvmf_tcp /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:08:10.622 11:20:56 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:10.622 11:20:56 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:10.622 11:20:56 -- common/autotest_common.sh@10 -- # set +x 00:08:10.622 ************************************ 00:08:10.622 START TEST nvmf_tcp 00:08:10.622 ************************************ 00:08:10.622 11:20:56 nvmf_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:08:10.622 * Looking for test storage... 00:08:10.622 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:08:10.622 11:20:56 nvmf_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:10.622 11:20:56 nvmf_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:08:10.622 11:20:56 nvmf_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:10.887 11:20:56 nvmf_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:10.887 11:20:56 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:10.887 11:20:56 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:10.887 11:20:56 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:10.887 11:20:56 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:08:10.887 11:20:56 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:08:10.887 11:20:56 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:08:10.887 11:20:56 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:08:10.887 11:20:56 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:08:10.887 11:20:56 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:08:10.887 11:20:56 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:08:10.887 11:20:56 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:10.887 11:20:56 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:08:10.887 11:20:56 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:08:10.887 11:20:56 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:10.887 11:20:56 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:10.887 11:20:56 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:08:10.887 11:20:56 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:08:10.887 11:20:56 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:10.887 11:20:56 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:08:10.887 11:20:56 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:08:10.887 11:20:56 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:08:10.887 11:20:56 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:08:10.887 11:20:56 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:10.887 11:20:56 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:08:10.887 11:20:56 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:08:10.887 11:20:56 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:10.887 11:20:56 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:10.887 11:20:56 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:08:10.887 11:20:56 nvmf_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:10.887 11:20:56 nvmf_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:10.887 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:10.887 --rc genhtml_branch_coverage=1 00:08:10.887 --rc genhtml_function_coverage=1 00:08:10.887 --rc genhtml_legend=1 00:08:10.887 --rc geninfo_all_blocks=1 00:08:10.887 --rc geninfo_unexecuted_blocks=1 00:08:10.887 00:08:10.887 ' 00:08:10.887 11:20:56 nvmf_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:10.887 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:10.887 --rc genhtml_branch_coverage=1 00:08:10.887 --rc genhtml_function_coverage=1 00:08:10.887 --rc genhtml_legend=1 00:08:10.887 --rc geninfo_all_blocks=1 00:08:10.887 --rc geninfo_unexecuted_blocks=1 00:08:10.887 00:08:10.887 ' 00:08:10.887 11:20:56 nvmf_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:10.887 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:10.887 --rc genhtml_branch_coverage=1 00:08:10.887 --rc genhtml_function_coverage=1 00:08:10.887 --rc genhtml_legend=1 00:08:10.887 --rc geninfo_all_blocks=1 00:08:10.887 --rc geninfo_unexecuted_blocks=1 00:08:10.887 00:08:10.887 ' 00:08:10.887 11:20:56 nvmf_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:10.887 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:10.887 --rc genhtml_branch_coverage=1 00:08:10.887 --rc genhtml_function_coverage=1 00:08:10.887 --rc genhtml_legend=1 00:08:10.887 --rc geninfo_all_blocks=1 00:08:10.887 --rc geninfo_unexecuted_blocks=1 00:08:10.887 00:08:10.887 ' 00:08:10.887 11:20:56 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:08:10.887 11:20:56 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:08:10.887 11:20:56 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:08:10.887 11:20:56 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:10.887 11:20:56 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:10.887 11:20:56 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:10.887 ************************************ 00:08:10.887 START TEST nvmf_target_core 00:08:10.887 ************************************ 00:08:10.887 11:20:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:08:10.887 * Looking for test storage... 00:08:10.887 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:08:10.887 11:20:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:10.887 11:20:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # lcov --version 00:08:10.887 11:20:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:11.147 11:20:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:11.147 11:20:56 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:11.147 11:20:56 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:11.147 11:20:56 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:11.147 11:20:56 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:08:11.147 11:20:56 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:08:11.147 11:20:56 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:08:11.147 11:20:56 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:08:11.147 11:20:56 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:08:11.147 11:20:56 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:08:11.147 11:20:56 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:08:11.147 11:20:56 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:11.147 11:20:56 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:08:11.147 11:20:56 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:08:11.147 11:20:56 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:11.147 11:20:56 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:11.147 11:20:56 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:08:11.147 11:20:56 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:08:11.147 11:20:56 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:11.147 11:20:56 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:08:11.147 11:20:56 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:08:11.147 11:20:56 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:08:11.147 11:20:56 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:08:11.147 11:20:56 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:11.147 11:20:56 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:08:11.147 11:20:56 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:08:11.147 11:20:56 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:11.147 11:20:56 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:11.147 11:20:56 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:08:11.147 11:20:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:11.147 11:20:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:11.147 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:11.147 --rc genhtml_branch_coverage=1 00:08:11.147 --rc genhtml_function_coverage=1 00:08:11.147 --rc genhtml_legend=1 00:08:11.147 --rc geninfo_all_blocks=1 00:08:11.147 --rc geninfo_unexecuted_blocks=1 00:08:11.147 00:08:11.147 ' 00:08:11.147 11:20:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:11.147 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:11.147 --rc genhtml_branch_coverage=1 00:08:11.147 --rc genhtml_function_coverage=1 00:08:11.147 --rc genhtml_legend=1 00:08:11.147 --rc geninfo_all_blocks=1 00:08:11.147 --rc geninfo_unexecuted_blocks=1 00:08:11.147 00:08:11.147 ' 00:08:11.147 11:20:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:11.147 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:11.147 --rc genhtml_branch_coverage=1 00:08:11.147 --rc genhtml_function_coverage=1 00:08:11.147 --rc genhtml_legend=1 00:08:11.147 --rc geninfo_all_blocks=1 00:08:11.147 --rc geninfo_unexecuted_blocks=1 00:08:11.147 00:08:11.147 ' 00:08:11.147 11:20:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:11.147 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:11.147 --rc genhtml_branch_coverage=1 00:08:11.147 --rc genhtml_function_coverage=1 00:08:11.147 --rc genhtml_legend=1 00:08:11.147 --rc geninfo_all_blocks=1 00:08:11.147 --rc geninfo_unexecuted_blocks=1 00:08:11.147 00:08:11.147 ' 00:08:11.147 11:20:56 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:08:11.147 11:20:56 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:08:11.147 11:20:56 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:11.147 11:20:56 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:08:11.147 11:20:56 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:11.147 11:20:56 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:11.147 11:20:56 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:11.147 11:20:56 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:11.147 11:20:56 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:11.147 11:20:56 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:11.147 11:20:56 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:11.147 11:20:56 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:11.147 11:20:56 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:11.147 11:20:56 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:11.147 11:20:56 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a 00:08:11.147 11:20:56 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=806d442c-08ba-4719-b76d-33169283459a 00:08:11.147 11:20:56 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:11.147 11:20:56 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:11.147 11:20:56 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:11.147 11:20:56 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:11.147 11:20:56 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:11.147 11:20:56 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:08:11.147 11:20:56 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:11.147 11:20:56 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:11.147 11:20:56 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:11.147 11:20:56 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:11.147 11:20:56 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:08:11.147 11:20:56 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:11.147 11:20:56 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:08:11.147 11:20:56 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:11.147 11:20:56 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:08:11.147 11:20:56 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:11.147 11:20:56 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:11.147 11:20:56 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:11.147 11:20:56 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:11.147 11:20:56 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:11.147 11:20:56 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:11.147 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:11.147 11:20:56 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:11.147 11:20:56 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:11.147 11:20:56 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:11.147 11:20:56 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:08:11.147 11:20:56 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:08:11.147 11:20:56 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:08:11.147 11:20:56 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort.sh --transport=tcp 00:08:11.147 11:20:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:11.147 11:20:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:11.147 11:20:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:11.147 ************************************ 00:08:11.148 START TEST nvmf_abort 00:08:11.148 ************************************ 00:08:11.148 11:20:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort.sh --transport=tcp 00:08:11.148 * Looking for test storage... 
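The "line 33: [: : integer expression expected" message from test/nvmf/common.sh above is benign: the trace shows line 33 evaluating [ '' -eq 1 ] because the variable it tests is empty in this configuration, so [ complains, the test fails, and execution simply carries on to the next branch (the same message reappears when abort.sh re-sources common.sh below). If that noise were worth silencing, the usual shell idiom is to give the test a numeric default; a generic sketch, with FLAG as a placeholder rather than a claim about which variable common.sh actually checks:

  # FLAG may be unset or empty; default it to 0 so [ always sees an integer.
  if [ "${FLAG:-0}" -eq 1 ]; then
      echo "feature enabled"
  fi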
00:08:11.148 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:11.148 11:20:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:11.148 11:20:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # lcov --version 00:08:11.148 11:20:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:11.148 11:20:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:11.148 11:20:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:11.148 11:20:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:11.148 11:20:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:11.148 11:20:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:08:11.148 11:20:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:08:11.148 11:20:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:08:11.148 11:20:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:08:11.148 11:20:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:08:11.148 11:20:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:08:11.148 11:20:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:08:11.148 11:20:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:11.148 11:20:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:08:11.148 11:20:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:08:11.148 11:20:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:11.148 11:20:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:11.148 11:20:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:08:11.148 11:20:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:08:11.148 11:20:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:11.148 11:20:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:08:11.148 11:20:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:08:11.148 11:20:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:08:11.148 11:20:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:08:11.148 11:20:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:11.148 11:20:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:08:11.148 11:20:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:08:11.148 11:20:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:11.148 11:20:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:11.148 11:20:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:08:11.148 11:20:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:11.148 11:20:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:11.148 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:11.148 --rc genhtml_branch_coverage=1 00:08:11.148 --rc genhtml_function_coverage=1 00:08:11.148 --rc genhtml_legend=1 00:08:11.148 --rc geninfo_all_blocks=1 00:08:11.148 --rc geninfo_unexecuted_blocks=1 00:08:11.148 00:08:11.148 ' 00:08:11.148 11:20:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:11.148 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:11.148 --rc genhtml_branch_coverage=1 00:08:11.148 --rc genhtml_function_coverage=1 00:08:11.148 --rc genhtml_legend=1 00:08:11.148 --rc geninfo_all_blocks=1 00:08:11.148 --rc geninfo_unexecuted_blocks=1 00:08:11.148 00:08:11.148 ' 00:08:11.148 11:20:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:11.148 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:11.148 --rc genhtml_branch_coverage=1 00:08:11.148 --rc genhtml_function_coverage=1 00:08:11.148 --rc genhtml_legend=1 00:08:11.148 --rc geninfo_all_blocks=1 00:08:11.148 --rc geninfo_unexecuted_blocks=1 00:08:11.148 00:08:11.148 ' 00:08:11.148 11:20:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:11.148 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:11.148 --rc genhtml_branch_coverage=1 00:08:11.148 --rc genhtml_function_coverage=1 00:08:11.148 --rc genhtml_legend=1 00:08:11.148 --rc geninfo_all_blocks=1 00:08:11.148 --rc geninfo_unexecuted_blocks=1 00:08:11.148 00:08:11.148 ' 00:08:11.148 11:20:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:11.148 11:20:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:08:11.148 11:20:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
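abort.sh sources test/nvmf/common.sh again here, and a little further down nvmftestinit / nvmf_veth_init builds the virtual test network for NET_TYPE=virt: veth pairs for the initiator side and for the nvmf_tgt_ns_spdk target namespace, all bridged on nvmf_br, with 10.0.0.1-10.0.0.4/24 spread across them. The "Cannot find device" and "Cannot open network namespace" messages in that sequence are expected; they come from the cleanup pass that runs before anything has been created. A stripped-down sketch of the same topology, one veth pair per side with the addresses shown in the log (run as root; this is a simplified illustration, not the full helper):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator side
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br      # target side
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link add nvmf_br type bridge
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  ip link set nvmf_init_if up; ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up;  ip link set nvmf_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up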
00:08:11.148 11:20:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:11.148 11:20:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:11.148 11:20:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:11.148 11:20:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:11.148 11:20:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:11.148 11:20:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:11.148 11:20:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:11.148 11:20:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:11.148 11:20:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:11.148 11:20:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a 00:08:11.148 11:20:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=806d442c-08ba-4719-b76d-33169283459a 00:08:11.148 11:20:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:11.148 11:20:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:11.148 11:20:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:11.148 11:20:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:11.148 11:20:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:11.148 11:20:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:08:11.148 11:20:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:11.148 11:20:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:11.148 11:20:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:11.148 11:20:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:11.148 11:20:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:11.148 11:20:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:11.148 11:20:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:08:11.148 11:20:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:11.148 11:20:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:08:11.148 11:20:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:11.148 11:20:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:11.148 11:20:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:11.148 11:20:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:11.148 11:20:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:11.148 11:20:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:11.148 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:11.148 11:20:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:11.148 11:20:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:11.148 11:20:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:11.148 11:20:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:11.148 11:20:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:08:11.149 11:20:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:08:11.149 
11:20:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:11.149 11:20:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:11.149 11:20:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:11.149 11:20:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:11.149 11:20:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:11.149 11:20:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:11.149 11:20:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:11.149 11:20:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:11.149 11:20:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:08:11.149 11:20:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:08:11.149 11:20:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:08:11.149 11:20:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:08:11.149 11:20:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:08:11.149 11:20:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@460 -- # nvmf_veth_init 00:08:11.149 11:20:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:11.407 11:20:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:08:11.407 11:20:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:08:11.407 11:20:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:08:11.407 11:20:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:11.407 11:20:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:08:11.407 11:20:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:11.407 11:20:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:08:11.407 11:20:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:11.407 11:20:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:08:11.407 11:20:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:11.407 11:20:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:11.407 11:20:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:11.407 11:20:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:11.407 11:20:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:11.407 11:20:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:11.407 11:20:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@162 -- # ip link set 
nvmf_init_br nomaster 00:08:11.407 Cannot find device "nvmf_init_br" 00:08:11.407 11:20:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@162 -- # true 00:08:11.407 11:20:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:08:11.407 Cannot find device "nvmf_init_br2" 00:08:11.407 11:20:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@163 -- # true 00:08:11.407 11:20:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:08:11.407 Cannot find device "nvmf_tgt_br" 00:08:11.407 11:20:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@164 -- # true 00:08:11.407 11:20:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:08:11.407 Cannot find device "nvmf_tgt_br2" 00:08:11.407 11:20:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@165 -- # true 00:08:11.407 11:20:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:08:11.407 Cannot find device "nvmf_init_br" 00:08:11.408 11:20:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@166 -- # true 00:08:11.408 11:20:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:08:11.408 Cannot find device "nvmf_init_br2" 00:08:11.408 11:20:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@167 -- # true 00:08:11.408 11:20:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:08:11.408 Cannot find device "nvmf_tgt_br" 00:08:11.408 11:20:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@168 -- # true 00:08:11.408 11:20:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:08:11.408 Cannot find device "nvmf_tgt_br2" 00:08:11.408 11:20:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@169 -- # true 00:08:11.408 11:20:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:08:11.408 Cannot find device "nvmf_br" 00:08:11.408 11:20:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@170 -- # true 00:08:11.408 11:20:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:08:11.408 Cannot find device "nvmf_init_if" 00:08:11.408 11:20:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@171 -- # true 00:08:11.408 11:20:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:08:11.408 Cannot find device "nvmf_init_if2" 00:08:11.408 11:20:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@172 -- # true 00:08:11.408 11:20:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:11.408 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:11.408 11:20:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@173 -- # true 00:08:11.408 11:20:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:11.408 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:11.408 11:20:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@174 -- # true 00:08:11.408 11:20:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:08:11.408 11:20:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:11.408 11:20:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:08:11.408 11:20:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:11.408 11:20:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:11.408 11:20:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:11.408 11:20:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:11.408 11:20:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:11.408 11:20:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:08:11.408 11:20:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:08:11.408 11:20:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:08:11.408 11:20:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:08:11.408 11:20:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:08:11.408 11:20:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:08:11.408 11:20:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:08:11.408 11:20:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:08:11.408 11:20:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:08:11.408 11:20:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:11.667 11:20:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:11.667 11:20:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:11.667 11:20:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:08:11.667 11:20:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:08:11.667 11:20:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:08:11.667 11:20:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:08:11.667 11:20:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:11.667 11:20:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:11.667 11:20:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:11.667 11:20:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p 
tcp --dport 4420 -j ACCEPT' 00:08:11.667 11:20:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:08:11.667 11:20:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:08:11.667 11:20:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:11.667 11:20:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:08:11.667 11:20:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:08:11.667 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:11.667 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.150 ms 00:08:11.667 00:08:11.667 --- 10.0.0.3 ping statistics --- 00:08:11.667 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:11.667 rtt min/avg/max/mdev = 0.150/0.150/0.150/0.000 ms 00:08:11.667 11:20:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:08:11.667 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:08:11.667 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.092 ms 00:08:11.667 00:08:11.667 --- 10.0.0.4 ping statistics --- 00:08:11.667 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:11.667 rtt min/avg/max/mdev = 0.092/0.092/0.092/0.000 ms 00:08:11.667 11:20:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:11.667 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:11.667 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.062 ms 00:08:11.667 00:08:11.667 --- 10.0.0.1 ping statistics --- 00:08:11.667 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:11.667 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:08:11.667 11:20:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:08:11.667 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:11.667 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.124 ms 00:08:11.667 00:08:11.667 --- 10.0.0.2 ping statistics --- 00:08:11.667 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:11.667 rtt min/avg/max/mdev = 0.124/0.124/0.124/0.000 ms 00:08:11.667 11:20:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:11.667 11:20:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@461 -- # return 0 00:08:11.667 11:20:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:11.667 11:20:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:11.667 11:20:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:11.667 11:20:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:11.667 11:20:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:11.667 11:20:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:11.667 11:20:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:11.667 11:20:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:08:11.667 11:20:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:11.667 11:20:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:11.667 11:20:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:11.667 11:20:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@509 -- # nvmfpid=62007 00:08:11.667 11:20:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:08:11.667 11:20:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 62007 00:08:11.668 11:20:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 62007 ']' 00:08:11.668 11:20:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:11.668 11:20:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:11.668 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:11.668 11:20:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:11.668 11:20:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:11.668 11:20:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:11.926 [2024-11-20 11:20:57.319549] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 
00:08:11.926 [2024-11-20 11:20:57.319713] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:11.926 [2024-11-20 11:20:57.473499] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:12.184 [2024-11-20 11:20:57.526067] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:12.184 [2024-11-20 11:20:57.526475] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:12.184 [2024-11-20 11:20:57.526804] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:12.184 [2024-11-20 11:20:57.527092] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:12.184 [2024-11-20 11:20:57.527230] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:12.184 [2024-11-20 11:20:57.528278] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:12.184 [2024-11-20 11:20:57.528465] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:12.184 [2024-11-20 11:20:57.528484] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:12.184 11:20:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:12.184 11:20:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:08:12.184 11:20:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:12.184 11:20:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:12.184 11:20:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:12.184 11:20:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:12.184 11:20:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:08:12.184 11:20:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:12.184 11:20:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:12.184 [2024-11-20 11:20:57.665744] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:12.184 11:20:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:12.184 11:20:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:08:12.184 11:20:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:12.184 11:20:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:12.184 Malloc0 00:08:12.184 11:20:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:12.184 11:20:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:08:12.184 11:20:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:12.184 11:20:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:12.184 
Delay0 00:08:12.184 11:20:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:12.184 11:20:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:12.184 11:20:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:12.184 11:20:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:12.184 11:20:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:12.184 11:20:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:08:12.184 11:20:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:12.184 11:20:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:12.184 11:20:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:12.184 11:20:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:08:12.184 11:20:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:12.184 11:20:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:12.184 [2024-11-20 11:20:57.735872] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:08:12.184 11:20:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:12.184 11:20:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:08:12.184 11:20:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:12.184 11:20:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:12.184 11:20:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:12.185 11:20:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:08:12.443 [2024-11-20 11:20:57.931040] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:08:14.982 Initializing NVMe Controllers 00:08:14.982 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode0 00:08:14.982 controller IO queue size 128 less than required 00:08:14.982 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:08:14.982 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:08:14.982 Initialization complete. Launching workers. 
00:08:14.982 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 24082 00:08:14.982 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 24143, failed to submit 62 00:08:14.982 success 24086, unsuccessful 57, failed 0 00:08:14.982 11:20:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:14.982 11:20:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.982 11:20:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:14.982 11:20:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.982 11:20:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:08:14.982 11:20:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:08:14.982 11:20:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:14.982 11:20:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:08:14.982 11:21:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:14.982 11:21:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:08:14.982 11:21:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:14.982 11:21:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:14.982 rmmod nvme_tcp 00:08:14.982 rmmod nvme_fabrics 00:08:14.982 rmmod nvme_keyring 00:08:14.982 11:21:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:14.982 11:21:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:08:14.982 11:21:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:08:14.982 11:21:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 62007 ']' 00:08:14.982 11:21:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 62007 00:08:14.983 11:21:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 62007 ']' 00:08:14.983 11:21:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 62007 00:08:14.983 11:21:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:08:14.983 11:21:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:14.983 11:21:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62007 00:08:14.983 killing process with pid 62007 00:08:14.983 11:21:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:08:14.983 11:21:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:14.983 11:21:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62007' 00:08:14.983 11:21:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@973 -- # kill 62007 00:08:14.983 11:21:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@978 -- # wait 62007 00:08:14.983 11:21:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:14.983 11:21:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:14.983 11:21:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:14.983 11:21:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:08:14.983 11:21:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:08:14.983 11:21:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:14.983 11:21:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:08:14.983 11:21:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:14.983 11:21:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:08:14.983 11:21:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:08:14.983 11:21:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:08:14.983 11:21:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:08:14.983 11:21:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:08:14.983 11:21:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:08:14.983 11:21:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:08:14.983 11:21:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:08:14.983 11:21:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:08:14.983 11:21:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:08:14.983 11:21:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:08:14.983 11:21:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:08:14.983 11:21:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:14.983 11:21:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:14.983 11:21:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@246 -- # remove_spdk_ns 00:08:14.983 11:21:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:14.983 11:21:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:14.983 11:21:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:14.983 11:21:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@300 -- # return 0 00:08:14.983 00:08:14.983 real 0m4.017s 00:08:14.983 user 0m10.086s 00:08:14.983 sys 0m1.106s 00:08:14.983 ************************************ 00:08:14.983 END TEST nvmf_abort 00:08:14.983 ************************************ 00:08:14.983 11:21:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:14.983 11:21:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:14.983 11:21:00 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:08:14.983 11:21:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:14.983 11:21:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:14.983 11:21:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:15.242 ************************************ 00:08:15.242 START TEST nvmf_ns_hotplug_stress 00:08:15.242 ************************************ 00:08:15.242 11:21:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:08:15.242 * Looking for test storage... 00:08:15.242 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:15.242 11:21:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:15.242 11:21:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lcov --version 00:08:15.242 11:21:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:15.242 11:21:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:15.242 11:21:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:15.242 11:21:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:15.242 11:21:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:15.242 11:21:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:08:15.242 11:21:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:08:15.242 11:21:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:08:15.242 11:21:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:08:15.242 11:21:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:08:15.242 11:21:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:08:15.242 11:21:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:08:15.242 11:21:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:15.242 11:21:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:08:15.242 11:21:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:08:15.242 11:21:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:15.242 11:21:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:15.242 11:21:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:08:15.242 11:21:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:08:15.243 11:21:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:15.243 11:21:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:08:15.243 11:21:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:08:15.243 11:21:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:08:15.243 11:21:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:08:15.243 11:21:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:15.243 11:21:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:08:15.243 11:21:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:08:15.243 11:21:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:15.243 11:21:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:15.243 11:21:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:08:15.243 11:21:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:15.243 11:21:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:15.243 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:15.243 --rc genhtml_branch_coverage=1 00:08:15.243 --rc genhtml_function_coverage=1 00:08:15.243 --rc genhtml_legend=1 00:08:15.243 --rc geninfo_all_blocks=1 00:08:15.243 --rc geninfo_unexecuted_blocks=1 00:08:15.243 00:08:15.243 ' 00:08:15.243 11:21:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:15.243 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:15.243 --rc genhtml_branch_coverage=1 00:08:15.243 --rc genhtml_function_coverage=1 00:08:15.243 --rc genhtml_legend=1 00:08:15.243 --rc geninfo_all_blocks=1 00:08:15.243 --rc geninfo_unexecuted_blocks=1 00:08:15.243 00:08:15.243 ' 00:08:15.243 11:21:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:15.243 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:15.243 --rc genhtml_branch_coverage=1 00:08:15.243 --rc genhtml_function_coverage=1 00:08:15.243 --rc genhtml_legend=1 00:08:15.243 --rc geninfo_all_blocks=1 00:08:15.243 --rc geninfo_unexecuted_blocks=1 00:08:15.243 00:08:15.243 ' 00:08:15.243 11:21:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:15.243 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:15.243 --rc genhtml_branch_coverage=1 00:08:15.243 --rc genhtml_function_coverage=1 00:08:15.243 --rc genhtml_legend=1 00:08:15.243 --rc geninfo_all_blocks=1 00:08:15.243 --rc geninfo_unexecuted_blocks=1 00:08:15.243 00:08:15.243 ' 00:08:15.243 11:21:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source 
/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:15.243 11:21:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:08:15.243 11:21:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:15.243 11:21:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:15.243 11:21:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:15.243 11:21:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:15.243 11:21:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:15.243 11:21:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:15.243 11:21:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:15.243 11:21:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:15.243 11:21:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:15.243 11:21:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:15.243 11:21:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a 00:08:15.243 11:21:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=806d442c-08ba-4719-b76d-33169283459a 00:08:15.243 11:21:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:15.243 11:21:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:15.243 11:21:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:15.243 11:21:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:15.243 11:21:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:15.243 11:21:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:08:15.243 11:21:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:15.243 11:21:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:15.243 11:21:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:15.243 11:21:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:15.243 11:21:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:15.243 11:21:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:15.243 11:21:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:08:15.243 11:21:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:15.243 11:21:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:08:15.243 11:21:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:15.243 11:21:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:15.243 11:21:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:15.243 11:21:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:15.243 11:21:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:15.243 11:21:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:15.243 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:15.243 11:21:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:15.243 11:21:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:15.243 11:21:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:15.243 11:21:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:15.243 11:21:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:08:15.243 11:21:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:15.243 11:21:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:15.243 11:21:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:15.243 11:21:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:15.243 11:21:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:15.243 11:21:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:15.243 11:21:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:15.243 11:21:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:15.243 11:21:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:08:15.244 11:21:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:08:15.244 11:21:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:08:15.244 11:21:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:08:15.244 11:21:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:08:15.244 11:21:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@460 -- # nvmf_veth_init 00:08:15.244 11:21:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:15.244 11:21:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:08:15.244 11:21:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:08:15.244 11:21:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:08:15.244 11:21:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:15.244 11:21:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:08:15.244 11:21:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:15.244 11:21:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:08:15.244 11:21:00 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:15.244 11:21:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:08:15.244 11:21:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:15.244 11:21:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:15.244 11:21:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:15.244 11:21:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:15.244 11:21:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:15.244 11:21:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:15.244 11:21:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:08:15.244 Cannot find device "nvmf_init_br" 00:08:15.244 11:21:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@162 -- # true 00:08:15.244 11:21:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:08:15.502 Cannot find device "nvmf_init_br2" 00:08:15.502 11:21:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@163 -- # true 00:08:15.502 11:21:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:08:15.502 Cannot find device "nvmf_tgt_br" 00:08:15.502 11:21:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@164 -- # true 00:08:15.502 11:21:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:08:15.502 Cannot find device "nvmf_tgt_br2" 00:08:15.502 11:21:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@165 -- # true 00:08:15.502 11:21:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:08:15.502 Cannot find device "nvmf_init_br" 00:08:15.502 11:21:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@166 -- # true 00:08:15.502 11:21:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:08:15.502 Cannot find device "nvmf_init_br2" 00:08:15.502 11:21:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@167 -- # true 00:08:15.502 11:21:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:08:15.502 Cannot find device "nvmf_tgt_br" 00:08:15.502 11:21:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@168 -- # true 00:08:15.502 11:21:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:08:15.502 Cannot find device "nvmf_tgt_br2" 00:08:15.502 11:21:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@169 -- # true 00:08:15.502 11:21:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:08:15.502 Cannot find device "nvmf_br" 00:08:15.502 11:21:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@170 -- # true 00:08:15.502 11:21:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:08:15.502 Cannot find device "nvmf_init_if" 00:08:15.502 11:21:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@171 -- # true 00:08:15.502 11:21:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:08:15.502 Cannot find device "nvmf_init_if2" 00:08:15.502 11:21:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@172 -- # true 00:08:15.502 11:21:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:15.502 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:15.502 11:21:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@173 -- # true 00:08:15.502 11:21:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:15.502 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:15.502 11:21:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@174 -- # true 00:08:15.502 11:21:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:08:15.502 11:21:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:15.502 11:21:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:08:15.502 11:21:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:15.502 11:21:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:15.502 11:21:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:15.502 11:21:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:15.502 11:21:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:15.502 11:21:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:08:15.503 11:21:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:08:15.503 11:21:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:08:15.503 11:21:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:08:15.503 11:21:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:08:15.503 11:21:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:08:15.503 11:21:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:08:15.503 11:21:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@200 -- # ip 
link set nvmf_tgt_br up 00:08:15.503 11:21:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:08:15.762 11:21:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:15.762 11:21:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:15.762 11:21:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:15.762 11:21:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:08:15.762 11:21:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:08:15.762 11:21:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:08:15.762 11:21:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:08:15.762 11:21:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:15.762 11:21:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:15.762 11:21:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:15.762 11:21:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:08:15.762 11:21:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:08:15.762 11:21:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:08:15.762 11:21:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:15.762 11:21:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:08:15.762 11:21:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:08:15.762 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:15.762 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.060 ms 00:08:15.762 00:08:15.762 --- 10.0.0.3 ping statistics --- 00:08:15.762 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:15.762 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:08:15.762 11:21:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:08:15.762 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:08:15.762 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.110 ms 00:08:15.762 00:08:15.762 --- 10.0.0.4 ping statistics --- 00:08:15.762 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:15.762 rtt min/avg/max/mdev = 0.110/0.110/0.110/0.000 ms 00:08:15.762 11:21:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:15.762 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:15.762 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.043 ms 00:08:15.762 00:08:15.762 --- 10.0.0.1 ping statistics --- 00:08:15.762 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:15.762 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:08:15.762 11:21:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:08:15.762 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:15.762 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.056 ms 00:08:15.762 00:08:15.762 --- 10.0.0.2 ping statistics --- 00:08:15.762 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:15.762 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:08:15.762 11:21:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:15.762 11:21:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@461 -- # return 0 00:08:15.762 11:21:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:15.762 11:21:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:15.762 11:21:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:15.762 11:21:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:15.762 11:21:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:15.762 11:21:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:15.762 11:21:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:15.762 11:21:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:08:15.762 11:21:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:15.762 11:21:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:15.762 11:21:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:08:15.762 11:21:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=62285 00:08:15.762 11:21:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:08:15.762 11:21:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 62285 00:08:15.762 11:21:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 62285 ']' 00:08:15.762 11:21:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:15.762 11:21:01 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:15.762 11:21:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:15.762 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:15.762 11:21:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:15.762 11:21:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:08:15.762 [2024-11-20 11:21:01.323081] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 00:08:15.762 [2024-11-20 11:21:01.323227] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:16.022 [2024-11-20 11:21:01.484610] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:16.022 [2024-11-20 11:21:01.526055] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:16.022 [2024-11-20 11:21:01.526299] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:16.022 [2024-11-20 11:21:01.526379] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:16.022 [2024-11-20 11:21:01.526472] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:16.022 [2024-11-20 11:21:01.526552] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:08:16.022 [2024-11-20 11:21:01.527574] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:16.022 [2024-11-20 11:21:01.527691] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:16.022 [2024-11-20 11:21:01.527702] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:16.993 11:21:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:16.993 11:21:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:08:16.993 11:21:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:16.993 11:21:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:16.993 11:21:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:08:16.993 11:21:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:16.993 11:21:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:08:16.993 11:21:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:17.268 [2024-11-20 11:21:02.663487] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:17.268 11:21:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:17.526 11:21:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:08:17.784 [2024-11-20 11:21:03.224125] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:08:17.784 11:21:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:08:18.043 11:21:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:08:18.301 Malloc0 00:08:18.301 11:21:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:08:18.559 Delay0 00:08:18.559 11:21:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:18.817 11:21:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:08:19.075 NULL1 00:08:19.333 11:21:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:08:19.591 11:21:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # 
PERF_PID=62429 00:08:19.591 11:21:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:08:19.591 11:21:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62429 00:08:19.591 11:21:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:20.965 Read completed with error (sct=0, sc=11) 00:08:20.965 11:21:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:20.965 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:20.965 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:20.965 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:20.965 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:20.965 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:20.965 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:20.965 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:21.222 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:21.222 11:21:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:08:21.222 11:21:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:08:21.481 true 00:08:21.481 11:21:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62429 00:08:21.481 11:21:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:22.415 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:22.415 11:21:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:22.415 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:22.415 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:22.415 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:22.415 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:22.415 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:22.415 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:22.415 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:22.671 11:21:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:08:22.671 11:21:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:08:22.928 true 00:08:22.928 11:21:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62429 00:08:22.928 
11:21:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:23.492 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:23.492 11:21:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:23.492 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:23.749 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:23.749 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:23.749 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:24.007 11:21:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:08:24.007 11:21:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:08:24.571 true 00:08:24.571 11:21:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62429 00:08:24.571 11:21:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:25.947 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:25.947 11:21:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:25.947 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:25.947 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:25.947 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:25.947 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:26.205 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:26.205 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:26.205 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:26.205 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:26.205 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:26.465 11:21:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:08:26.465 11:21:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:08:27.031 true 00:08:27.031 11:21:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62429 00:08:27.031 11:21:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:27.294 11:21:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:27.294 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:27.626 Message suppressed 999 times: Read completed with error 
(sct=0, sc=11) 00:08:27.626 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:27.626 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:27.626 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:27.885 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:27.885 11:21:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:08:27.885 11:21:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:08:28.453 true 00:08:28.453 11:21:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62429 00:08:28.453 11:21:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:29.021 11:21:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:29.588 11:21:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:08:29.588 11:21:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:08:29.847 true 00:08:29.847 11:21:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62429 00:08:29.847 11:21:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:31.222 11:21:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:31.222 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:31.222 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:31.222 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:31.222 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:31.222 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:31.481 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:31.481 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:31.481 11:21:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:08:31.481 11:21:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:08:32.047 true 00:08:32.047 11:21:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62429 00:08:32.047 11:21:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:32.641 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:32.641 11:21:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:32.641 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:32.641 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:32.641 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:32.899 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:32.899 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:32.899 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:32.899 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:33.156 11:21:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:08:33.156 11:21:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:08:33.414 true 00:08:33.414 11:21:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62429 00:08:33.414 11:21:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:34.347 11:21:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:34.914 11:21:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:08:34.914 11:21:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:08:35.172 true 00:08:35.172 11:21:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62429 00:08:35.172 11:21:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:36.547 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:36.547 11:21:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:36.547 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:36.547 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:36.547 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:36.547 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:36.547 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:36.547 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:36.873 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:36.873 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:36.873 11:21:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:08:36.873 11:21:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:08:37.131 true 00:08:37.131 11:21:22 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62429 00:08:37.131 11:21:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:37.715 11:21:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:37.715 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:37.715 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:37.973 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:37.973 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:37.973 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:37.973 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:37.973 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:37.973 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:38.231 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:38.231 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:38.231 11:21:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:08:38.231 11:21:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:08:38.490 true 00:08:38.490 11:21:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62429 00:08:38.490 11:21:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:39.423 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:39.423 11:21:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:39.423 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:39.423 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:39.423 11:21:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:08:39.423 11:21:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:08:39.989 true 00:08:39.989 11:21:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62429 00:08:39.989 11:21:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:40.248 11:21:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:40.814 11:21:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:08:40.814 11:21:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:08:41.072 true 00:08:41.072 11:21:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62429 00:08:41.072 11:21:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:41.330 11:21:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:41.589 11:21:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:08:41.589 11:21:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:08:42.180 true 00:08:42.180 11:21:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62429 00:08:42.180 11:21:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:43.555 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:43.555 11:21:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:43.555 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:43.555 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:43.555 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:43.555 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:43.555 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:43.555 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:43.555 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:43.812 11:21:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:08:43.812 11:21:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:08:44.069 true 00:08:44.069 11:21:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62429 00:08:44.069 11:21:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:44.635 11:21:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:44.635 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:44.635 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:44.892 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:44.892 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:44.892 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:44.892 Message suppressed 999 times: Read completed 
with error (sct=0, sc=11) 00:08:44.892 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:44.892 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:45.150 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:45.150 11:21:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:08:45.150 11:21:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:08:45.408 true 00:08:45.408 11:21:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62429 00:08:45.408 11:21:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:46.344 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:46.344 11:21:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:46.344 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:46.344 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:46.344 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:46.344 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:46.344 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:46.603 11:21:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:08:46.603 11:21:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:08:46.862 true 00:08:46.862 11:21:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62429 00:08:46.862 11:21:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:48.312 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:48.312 11:21:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:48.312 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:48.570 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:48.570 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:48.570 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:48.570 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:48.570 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:48.570 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:48.828 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:48.828 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:48.828 11:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:08:48.828 11:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:08:49.086 true 00:08:49.086 11:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62429 00:08:49.086 11:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:49.654 Initializing NVMe Controllers 00:08:49.654 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:08:49.654 Controller IO queue size 128, less than required. 00:08:49.654 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:08:49.654 Controller IO queue size 128, less than required. 00:08:49.654 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:08:49.654 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:08:49.654 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:08:49.654 Initialization complete. Launching workers. 00:08:49.654 ======================================================== 00:08:49.654 Latency(us) 00:08:49.654 Device Information : IOPS MiB/s Average min max 00:08:49.654 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3647.73 1.78 24611.19 2843.19 1353751.80 00:08:49.654 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 11193.50 5.47 11435.16 3687.44 1241884.37 00:08:49.654 ======================================================== 00:08:49.654 Total : 14841.23 7.25 14673.61 2843.19 1353751.80 00:08:49.654 00:08:49.912 11:21:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:50.171 11:21:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:08:50.171 11:21:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:08:50.430 true 00:08:50.430 11:21:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62429 00:08:50.430 /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (62429) - No such process 00:08:50.430 11:21:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 62429 00:08:50.430 11:21:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:50.689 11:21:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:50.947 11:21:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:08:50.947 11:21:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:08:50.947 11:21:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:08:50.947 11:21:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:50.947 11:21:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:08:51.207 null0 00:08:51.465 11:21:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:51.465 11:21:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:51.465 11:21:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:08:51.724 null1 00:08:51.724 11:21:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:51.724 11:21:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:51.724 11:21:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:08:51.982 null2 00:08:51.982 11:21:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:51.982 11:21:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:51.982 11:21:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:08:52.240 null3 00:08:52.240 11:21:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:52.240 11:21:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:52.240 11:21:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:08:52.498 null4 00:08:52.498 11:21:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:52.498 11:21:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:52.498 11:21:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:08:52.756 null5 00:08:52.756 11:21:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:52.756 11:21:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:52.756 11:21:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:08:53.014 null6 00:08:53.014 11:21:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:53.014 11:21:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:53.014 11:21:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:08:53.274 null7 00:08:53.274 11:21:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i 
)) 00:08:53.274 11:21:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:53.274 11:21:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:08:53.274 11:21:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:53.274 11:21:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:08:53.274 11:21:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:53.274 11:21:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:08:53.274 11:21:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:53.274 11:21:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:53.274 11:21:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:53.274 11:21:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:53.274 11:21:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:53.274 11:21:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:53.274 11:21:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:08:53.274 11:21:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:53.274 11:21:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:08:53.274 11:21:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:53.274 11:21:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:53.274 11:21:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:53.274 11:21:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:53.274 11:21:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:08:53.274 11:21:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:08:53.274 11:21:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:08:53.274 11:21:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:53.274 11:21:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:53.274 11:21:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:53.274 11:21:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:53.274 11:21:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:53.274 11:21:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:53.274 11:21:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:53.274 11:21:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:08:53.274 11:21:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:53.274 11:21:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:08:53.274 11:21:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:53.274 11:21:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:53.274 11:21:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:53.274 11:21:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:53.274 11:21:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:08:53.274 11:21:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:53.274 11:21:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:08:53.274 11:21:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:53.274 11:21:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:53.274 11:21:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:53.274 11:21:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:53.274 11:21:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:08:53.274 11:21:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:53.274 11:21:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:53.274 11:21:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:08:53.274 11:21:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:08:53.274 11:21:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:53.274 11:21:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:53.274 11:21:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:53.274 11:21:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:53.274 11:21:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:53.274 11:21:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:53.274 11:21:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:53.274 11:21:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:53.274 11:21:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:53.274 11:21:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 63260 63262 63263 63266 63267 63269 63271 63273 00:08:53.274 11:21:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:08:53.274 11:21:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:08:53.274 11:21:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:53.274 11:21:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:53.274 11:21:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:53.274 11:21:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:08:53.274 11:21:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:08:53.274 11:21:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:53.274 11:21:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:53.274 11:21:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:53.533 11:21:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:53.533 11:21:39 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:53.794 11:21:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:53.794 11:21:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:53.794 11:21:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:53.794 11:21:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:53.794 11:21:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:53.794 11:21:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:53.794 11:21:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:53.794 11:21:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:53.794 11:21:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:54.052 11:21:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:54.052 11:21:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:54.052 11:21:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:54.052 11:21:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:54.052 11:21:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:54.052 11:21:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:54.052 11:21:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:54.052 11:21:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:54.052 11:21:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:54.052 11:21:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:54.052 11:21:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:54.052 11:21:39 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:54.052 11:21:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:54.052 11:21:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:54.052 11:21:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:54.052 11:21:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:54.052 11:21:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:54.052 11:21:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:54.052 11:21:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:54.052 11:21:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:54.052 11:21:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:54.310 11:21:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:54.310 11:21:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:54.310 11:21:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:54.310 11:21:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:54.310 11:21:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:54.310 11:21:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:54.310 11:21:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:54.568 11:21:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:54.568 11:21:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:54.568 11:21:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:54.568 11:21:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:54.568 11:21:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:54.568 11:21:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:54.568 11:21:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:54.568 11:21:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:54.568 11:21:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:54.568 11:21:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:54.568 11:21:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:54.568 11:21:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:54.568 11:21:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:54.826 11:21:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:54.826 11:21:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:54.826 11:21:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:54.826 11:21:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:54.826 11:21:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:54.826 11:21:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:54.826 11:21:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:54.826 11:21:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:54.826 11:21:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:54.826 11:21:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:54.826 11:21:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:54.826 11:21:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:54.826 11:21:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 
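The interleaved add/remove traffic above comes from eight background workers, each repeatedly attaching and detaching one namespace of cnode1. A condensed sketch of that phase, not the script verbatim: the per-worker iteration count of 10 and the nsid/bdev pairing (nsid 1..8 against null0..null7) follow the values visible in the trace, and rpc.py stands in for the full scripts/rpc.py path.

    add_remove() {
        local nsid=$1 bdev=$2
        # Flip one namespace on and off ten times against the same subsystem.
        for ((i = 0; i < 10; i++)); do
            rpc.py nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"
            rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"
        done
    }

    pids=()
    for ((i = 0; i < 8; i++)); do
        add_remove "$((i + 1))" "null$i" &   # one worker per namespace
        pids+=($!)
    done
    wait "${pids[@]}"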
00:08:54.826 11:21:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:54.826 11:21:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:55.084 11:21:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:55.084 11:21:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:55.084 11:21:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:55.084 11:21:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:55.084 11:21:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:55.084 11:21:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:55.084 11:21:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:55.084 11:21:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:55.084 11:21:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:55.084 11:21:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:55.084 11:21:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:55.342 11:21:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:55.342 11:21:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:55.342 11:21:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:55.342 11:21:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:55.342 11:21:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:55.342 11:21:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:55.342 11:21:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:55.342 11:21:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:55.342 11:21:40 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:55.342 11:21:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:55.342 11:21:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:55.342 11:21:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:55.342 11:21:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:55.342 11:21:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:55.342 11:21:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:55.342 11:21:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:55.342 11:21:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:55.342 11:21:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:55.342 11:21:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:55.601 11:21:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:55.601 11:21:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:55.601 11:21:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:55.601 11:21:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:55.601 11:21:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:55.601 11:21:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:55.601 11:21:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:55.601 11:21:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:55.859 11:21:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:55.859 11:21:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:55.859 11:21:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:55.859 11:21:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:55.859 11:21:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:55.859 11:21:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:55.859 11:21:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:55.859 11:21:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:55.859 11:21:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:55.859 11:21:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:55.859 11:21:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:55.859 11:21:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:55.859 11:21:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:55.859 11:21:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:55.859 11:21:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:55.859 11:21:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:55.859 11:21:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:56.118 11:21:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:56.118 11:21:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:56.118 11:21:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:56.118 11:21:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:56.118 11:21:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:56.118 11:21:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:56.118 11:21:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:56.118 
11:21:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:56.118 11:21:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:56.118 11:21:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:56.376 11:21:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:56.376 11:21:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:56.376 11:21:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:56.376 11:21:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:56.376 11:21:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:56.376 11:21:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:56.376 11:21:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:56.376 11:21:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:56.376 11:21:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:56.376 11:21:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:56.635 11:21:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:56.635 11:21:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:56.635 11:21:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:56.635 11:21:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:56.635 11:21:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:56.635 11:21:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:56.635 11:21:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:56.635 11:21:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:56.635 11:21:42 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:56.635 11:21:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:56.635 11:21:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:56.635 11:21:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:56.635 11:21:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:56.635 11:21:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:56.635 11:21:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:56.635 11:21:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:56.893 11:21:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:56.893 11:21:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:56.893 11:21:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:56.893 11:21:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:56.893 11:21:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:56.893 11:21:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:56.893 11:21:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:56.893 11:21:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:56.893 11:21:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:56.893 11:21:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:57.152 11:21:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:57.152 11:21:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:57.152 11:21:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:57.152 11:21:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:57.152 11:21:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:57.152 11:21:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:57.152 11:21:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:57.152 11:21:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:57.152 11:21:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:57.152 11:21:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:57.152 11:21:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:57.152 11:21:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:57.152 11:21:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:57.152 11:21:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:57.152 11:21:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:57.411 11:21:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:57.411 11:21:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:57.411 11:21:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:57.411 11:21:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:57.411 11:21:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:57.411 11:21:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:57.411 11:21:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:57.411 11:21:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:57.411 11:21:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:57.411 11:21:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:57.411 
11:21:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:57.411 11:21:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:57.669 11:21:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:57.669 11:21:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:57.669 11:21:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:57.669 11:21:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:57.669 11:21:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:57.669 11:21:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:57.669 11:21:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:57.669 11:21:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:57.669 11:21:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:57.669 11:21:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:57.669 11:21:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:57.669 11:21:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:57.928 11:21:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:57.928 11:21:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:57.928 11:21:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:57.928 11:21:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:57.928 11:21:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:57.928 11:21:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:57.928 11:21:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:57.928 11:21:43 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:57.928 11:21:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:57.928 11:21:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:57.928 11:21:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:57.928 11:21:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:57.928 11:21:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:58.186 11:21:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:58.186 11:21:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:58.186 11:21:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:58.186 11:21:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:58.186 11:21:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:58.186 11:21:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:58.186 11:21:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:58.186 11:21:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:58.186 11:21:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:58.187 11:21:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:58.187 11:21:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:58.187 11:21:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:58.187 11:21:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:58.445 11:21:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:58.445 11:21:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:58.445 11:21:43 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:58.445 11:21:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:58.445 11:21:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:58.445 11:21:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:58.445 11:21:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:58.445 11:21:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:58.445 11:21:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:58.445 11:21:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:58.445 11:21:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:58.445 11:21:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:58.445 11:21:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:58.445 11:21:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:58.445 11:21:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:58.704 11:21:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:58.704 11:21:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:58.704 11:21:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:58.704 11:21:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:58.704 11:21:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:58.704 11:21:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:58.704 11:21:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:58.704 11:21:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:58.704 11:21:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:58.704 11:21:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:58.704 11:21:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:58.704 11:21:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:58.704 11:21:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:58.962 11:21:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:58.962 11:21:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:58.962 11:21:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:58.962 11:21:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:58.962 11:21:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:58.962 11:21:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:58.962 11:21:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:58.962 11:21:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:58.962 11:21:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:58.962 11:21:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:59.220 11:21:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:59.220 11:21:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:59.220 11:21:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:59.220 11:21:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:59.220 11:21:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:59.220 11:21:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:59.220 11:21:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:59.220 
11:21:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:59.220 11:21:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:59.220 11:21:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:59.220 11:21:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:59.220 11:21:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:59.220 11:21:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:59.478 11:21:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:59.478 11:21:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:59.478 11:21:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:59.478 11:21:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:59.478 11:21:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:59.478 11:21:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:59.478 11:21:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:59.478 11:21:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:59.478 11:21:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:59.478 11:21:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:59.478 11:21:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:59.478 11:21:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:59.737 11:21:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:59.737 11:21:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:59.737 11:21:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:59.737 11:21:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 
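Pulling the repeated markers together: ns_hotplug_stress.sh@16 is a counter loop bounded at 10, @17 is the add_ns call, @18 the remove_ns call, and the scrambled namespace ordering in the log indicates several such loops running in parallel, one per namespace ID. A rough reconstruction of that shape follows; it is inferred from the trace rather than copied from the script, and the helper name add_remove is only illustrative:

    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1

    # hot-add and hot-remove one namespace ten times (illustrative helper)
    add_remove() {
        local nsid=$1 bdev=$2
        for ((i = 0; i < 10; i++)); do                                 # @16 in the trace
            "$rpc_py" nvmf_subsystem_add_ns -n "$nsid" "$nqn" "$bdev"  # @17
            "$rpc_py" nvmf_subsystem_remove_ns "$nqn" "$nsid"          # @18
        done
    }

    # one background job per namespace ID explains the interleaved ordering above
    for n in $(seq 1 8); do
        add_remove "$n" "null$((n - 1))" &
    done
    wait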
00:08:59.737 11:21:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:59.737 11:21:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:59.737 11:21:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:59.737 11:21:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:59.737 11:21:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:59.737 11:21:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:59.737 11:21:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:59.737 11:21:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:59.995 11:21:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:59.995 11:21:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:59.995 11:21:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:59.995 11:21:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:59.995 11:21:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:08:59.995 11:21:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:08:59.995 11:21:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:59.995 11:21:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:08:59.995 11:21:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:59.995 11:21:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:08:59.995 11:21:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:59.995 11:21:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:59.995 rmmod nvme_tcp 00:08:59.995 rmmod nvme_fabrics 00:09:00.254 rmmod nvme_keyring 00:09:00.254 11:21:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:00.254 11:21:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:09:00.254 11:21:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:09:00.254 11:21:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 62285 ']' 00:09:00.254 11:21:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 62285 00:09:00.254 11:21:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 62285 ']' 00:09:00.254 11:21:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 62285 00:09:00.254 11:21:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
common/autotest_common.sh@959 -- # uname 00:09:00.254 11:21:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:00.254 11:21:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62285 00:09:00.254 11:21:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:09:00.254 11:21:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:09:00.254 killing process with pid 62285 00:09:00.254 11:21:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62285' 00:09:00.254 11:21:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 62285 00:09:00.254 11:21:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 62285 00:09:00.254 11:21:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:00.254 11:21:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:00.254 11:21:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:00.254 11:21:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:09:00.254 11:21:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:09:00.254 11:21:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:00.254 11:21:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:09:00.254 11:21:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:00.254 11:21:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:09:00.254 11:21:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:09:00.254 11:21:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:09:00.254 11:21:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:09:00.254 11:21:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:09:00.254 11:21:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:09:00.254 11:21:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:09:00.513 11:21:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:09:00.513 11:21:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:09:00.513 11:21:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:09:00.513 11:21:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:09:00.513 11:21:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:09:00.513 11:21:45 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:00.513 11:21:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:00.513 11:21:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@246 -- # remove_spdk_ns 00:09:00.513 11:21:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:00.513 11:21:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:00.513 11:21:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:00.513 11:21:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@300 -- # return 0 00:09:00.513 00:09:00.513 real 0m45.418s 00:09:00.513 user 3m39.918s 00:09:00.513 sys 0m13.299s 00:09:00.513 11:21:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:00.513 11:21:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:09:00.513 ************************************ 00:09:00.513 END TEST nvmf_ns_hotplug_stress 00:09:00.513 ************************************ 00:09:00.514 11:21:46 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:09:00.514 11:21:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:00.514 11:21:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:00.514 11:21:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:00.514 ************************************ 00:09:00.514 START TEST nvmf_delete_subsystem 00:09:00.514 ************************************ 00:09:00.514 11:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:09:00.801 * Looking for test storage... 
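The nvmf_ns_hotplug_stress teardown traced above (nvmftestfini) condenses to a few steps. This is a paraphrase of the trace, not the verbatim nvmf/common.sh; the pid variable stands in for the literal 62285, and the final netns delete is an assumption about what remove_spdk_ns does, since it runs with xtrace disabled:

    # unload the kernel NVMe-oF initiator modules (the rmmod lines in the log are modprobe -v output)
    modprobe -v -r nvme-tcp
    modprobe -v -r nvme-fabrics

    # stop the nvmf_tgt reactor process started for this test
    kill "$nvmf_tgt_pid" && wait "$nvmf_tgt_pid"

    # drop SPDK-specific iptables rules while keeping everything else
    iptables-save | grep -v SPDK_NVMF | iptables-restore

    # dismantle the veth/bridge topology and the target network namespace
    for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" nomaster
    done
    ip link delete nvmf_br type bridge
    ip link delete nvmf_init_if
    ip link delete nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
    ip netns delete nvmf_tgt_ns_spdk    # assumed content of remove_spdk_ns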
00:09:00.801 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:00.801 11:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:00.801 11:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:00.801 11:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lcov --version 00:09:00.801 11:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:00.801 11:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:00.801 11:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:00.801 11:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:00.801 11:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:09:00.801 11:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:09:00.801 11:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:09:00.801 11:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:09:00.801 11:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:09:00.801 11:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:09:00.801 11:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:09:00.801 11:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:00.801 11:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:09:00.801 11:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:09:00.801 11:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:00.801 11:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:00.801 11:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:09:00.801 11:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:09:00.801 11:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:00.801 11:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:09:00.801 11:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:09:00.801 11:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:09:00.801 11:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:09:00.801 11:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:00.801 11:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:09:00.801 11:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:09:00.801 11:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:00.801 11:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:00.801 11:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:09:00.801 11:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:00.801 11:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:00.801 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:00.801 --rc genhtml_branch_coverage=1 00:09:00.801 --rc genhtml_function_coverage=1 00:09:00.801 --rc genhtml_legend=1 00:09:00.801 --rc geninfo_all_blocks=1 00:09:00.801 --rc geninfo_unexecuted_blocks=1 00:09:00.801 00:09:00.801 ' 00:09:00.801 11:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:00.801 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:00.801 --rc genhtml_branch_coverage=1 00:09:00.801 --rc genhtml_function_coverage=1 00:09:00.801 --rc genhtml_legend=1 00:09:00.801 --rc geninfo_all_blocks=1 00:09:00.801 --rc geninfo_unexecuted_blocks=1 00:09:00.801 00:09:00.801 ' 00:09:00.801 11:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:00.801 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:00.801 --rc genhtml_branch_coverage=1 00:09:00.801 --rc genhtml_function_coverage=1 00:09:00.801 --rc genhtml_legend=1 00:09:00.801 --rc geninfo_all_blocks=1 00:09:00.801 --rc geninfo_unexecuted_blocks=1 00:09:00.801 00:09:00.801 ' 00:09:00.801 11:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:00.801 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:00.801 --rc genhtml_branch_coverage=1 00:09:00.801 --rc genhtml_function_coverage=1 00:09:00.801 --rc genhtml_legend=1 00:09:00.801 --rc geninfo_all_blocks=1 00:09:00.801 --rc geninfo_unexecuted_blocks=1 00:09:00.801 00:09:00.801 ' 00:09:00.801 11:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source 
/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:00.801 11:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:09:00.801 11:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:00.801 11:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:00.801 11:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:00.801 11:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:00.801 11:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:00.802 11:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:00.802 11:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:00.802 11:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:00.802 11:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:00.802 11:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:00.802 11:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a 00:09:00.802 11:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=806d442c-08ba-4719-b76d-33169283459a 00:09:00.802 11:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:00.802 11:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:00.802 11:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:00.802 11:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:00.802 11:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:00.802 11:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:09:00.802 11:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:00.802 11:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:00.802 11:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:00.802 11:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:00.802 
11:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:00.802 11:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:00.802 11:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:09:00.802 11:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:00.802 11:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:09:00.802 11:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:00.802 11:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:00.802 11:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:00.802 11:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:00.802 11:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:00.802 11:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:00.802 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:00.802 11:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:00.802 11:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:00.802 11:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:00.802 11:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # 
nvmftestinit 00:09:00.802 11:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:00.802 11:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:00.802 11:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:00.802 11:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:00.802 11:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:00.802 11:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:00.802 11:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:00.802 11:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:00.802 11:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:09:00.802 11:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:09:00.802 11:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:09:00.802 11:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:09:00.802 11:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:09:00.802 11:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@460 -- # nvmf_veth_init 00:09:00.802 11:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:00.802 11:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:09:00.802 11:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:09:00.802 11:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:09:00.802 11:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:00.802 11:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:09:00.802 11:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:00.802 11:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:09:00.802 11:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:00.802 11:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:09:00.802 11:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:00.802 11:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:00.802 11:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:00.802 11:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 
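The "[: : integer expression expected" message a few entries above is emitted by build_nvmf_app_args in test/nvmf/common.sh: line 33 runs a numeric test ('[' '' -eq 1 ']') against a flag that is empty in this configuration, the test simply evaluates false, and execution continues, so the message is harmless noise rather than a failure. A minimal sketch of a guard that would silence it, with VAR standing in for whichever flag is empty here (a placeholder, not the real variable name):

    # hedged sketch only; VAR is a placeholder for the unset flag
    if [ "${VAR:-0}" -eq 1 ]; then
        : # the optional behaviour would be enabled here
    fi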
00:09:00.802 11:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:00.802 11:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:00.802 11:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:09:00.802 Cannot find device "nvmf_init_br" 00:09:00.802 11:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@162 -- # true 00:09:00.802 11:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:09:00.802 Cannot find device "nvmf_init_br2" 00:09:00.802 11:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@163 -- # true 00:09:00.802 11:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:09:00.802 Cannot find device "nvmf_tgt_br" 00:09:00.802 11:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@164 -- # true 00:09:00.802 11:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:09:00.802 Cannot find device "nvmf_tgt_br2" 00:09:00.802 11:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@165 -- # true 00:09:00.802 11:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:09:00.802 Cannot find device "nvmf_init_br" 00:09:00.802 11:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@166 -- # true 00:09:00.802 11:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:09:00.802 Cannot find device "nvmf_init_br2" 00:09:00.802 11:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@167 -- # true 00:09:00.802 11:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:09:00.802 Cannot find device "nvmf_tgt_br" 00:09:00.802 11:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@168 -- # true 00:09:00.802 11:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:09:00.802 Cannot find device "nvmf_tgt_br2" 00:09:00.802 11:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@169 -- # true 00:09:00.802 11:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:09:00.802 Cannot find device "nvmf_br" 00:09:00.802 11:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@170 -- # true 00:09:00.802 11:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:09:00.802 Cannot find device "nvmf_init_if" 00:09:01.099 11:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@171 -- # true 00:09:01.099 11:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:09:01.099 Cannot find device "nvmf_init_if2" 00:09:01.099 11:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@172 -- # true 00:09:01.099 11:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:01.099 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 
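The "Cannot find device ..." and "Cannot open network namespace ..." messages around this point are expected: nvmftestinit first tears down whatever interfaces a previous run may have left behind, and each cleanup command is followed by true so a missing device is ignored. The trace that follows then rebuilds the virtual topology from scratch: two veth pairs whose nvmf_init_* ends stay in the root namespace for the initiator (10.0.0.1 and 10.0.0.2) and two whose nvmf_tgt_* ends are moved into the nvmf_tgt_ns_spdk namespace for the target (10.0.0.3 and 10.0.0.4), all joined by the nvmf_br bridge. A condensed replay of those commands, taken from the entries below (root privileges assumed; the real script issues the bridge-port assignments one by one rather than in a loop):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if  type veth peer name nvmf_init_br     # initiator 1 and its bridge port
    ip link add nvmf_init_if2 type veth peer name nvmf_init_br2    # initiator 2 and its bridge port
    ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br      # target 1 and its bridge port
    ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2     # target 2 and its bridge port
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk                # target ends live in the namespace
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip addr add 10.0.0.2/24 dev nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
    ip link add nvmf_br type bridge
    for port in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$port" master nvmf_br                         # bridge all four ports together
    done

All links are then brought up, TCP port 4420 is opened in iptables (each rule tagged with an SPDK_NVMF comment so teardown can find it later), and the four pings that follow confirm initiator-to-target reachability in both directions before the target application is started.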
00:09:01.099 11:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@173 -- # true 00:09:01.099 11:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:01.099 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:01.099 11:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@174 -- # true 00:09:01.099 11:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:09:01.099 11:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:01.099 11:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:09:01.099 11:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:01.099 11:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:01.099 11:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:01.099 11:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:01.099 11:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:01.099 11:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:09:01.099 11:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:09:01.099 11:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:09:01.099 11:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:09:01.099 11:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:09:01.099 11:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:09:01.099 11:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:09:01.099 11:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:09:01.099 11:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:09:01.099 11:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:01.099 11:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:01.099 11:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:01.099 11:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:09:01.099 11:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@208 -- # ip link set nvmf_br 
up 00:09:01.099 11:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:09:01.099 11:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:09:01.099 11:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:01.099 11:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:01.099 11:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:01.099 11:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:09:01.099 11:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:09:01.099 11:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:09:01.099 11:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:01.099 11:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:09:01.099 11:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:09:01.099 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:01.099 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.064 ms 00:09:01.099 00:09:01.099 --- 10.0.0.3 ping statistics --- 00:09:01.099 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:01.099 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:09:01.099 11:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:09:01.099 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:09:01.099 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.037 ms 00:09:01.099 00:09:01.099 --- 10.0.0.4 ping statistics --- 00:09:01.099 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:01.099 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:09:01.099 11:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:01.099 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:01.099 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.019 ms 00:09:01.099 00:09:01.099 --- 10.0.0.1 ping statistics --- 00:09:01.099 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:01.099 rtt min/avg/max/mdev = 0.019/0.019/0.019/0.000 ms 00:09:01.099 11:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:09:01.099 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:01.100 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.067 ms 00:09:01.100 00:09:01.100 --- 10.0.0.2 ping statistics --- 00:09:01.100 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:01.100 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:09:01.100 11:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:01.100 11:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@461 -- # return 0 00:09:01.100 11:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:01.100 11:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:01.100 11:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:01.100 11:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:01.100 11:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:01.100 11:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:01.100 11:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:01.100 11:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:09:01.100 11:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:01.100 11:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:01.100 11:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:01.100 11:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=64666 00:09:01.100 11:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:09:01.100 11:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 64666 00:09:01.100 11:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 64666 ']' 00:09:01.100 11:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:01.100 11:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:01.100 11:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:01.100 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:01.100 11:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:01.100 11:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:01.359 [2024-11-20 11:21:46.726879] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 
00:09:01.359 [2024-11-20 11:21:46.726983] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:01.359 [2024-11-20 11:21:46.877259] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:01.359 [2024-11-20 11:21:46.910607] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:01.359 [2024-11-20 11:21:46.910665] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:01.359 [2024-11-20 11:21:46.910676] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:01.359 [2024-11-20 11:21:46.910684] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:01.359 [2024-11-20 11:21:46.910692] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:01.359 [2024-11-20 11:21:46.911510] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:01.359 [2024-11-20 11:21:46.911524] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:01.618 11:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:01.618 11:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:09:01.618 11:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:01.618 11:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:01.618 11:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:01.618 11:21:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:01.618 11:21:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:01.618 11:21:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.618 11:21:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:01.618 [2024-11-20 11:21:47.035597] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:01.618 11:21:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.618 11:21:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:01.618 11:21:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.618 11:21:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:01.618 11:21:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.618 11:21:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:09:01.618 11:21:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 
00:09:01.618 11:21:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:01.618 [2024-11-20 11:21:47.052888] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:09:01.618 11:21:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.618 11:21:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:09:01.618 11:21:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.618 11:21:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:01.618 NULL1 00:09:01.618 11:21:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.618 11:21:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:09:01.618 11:21:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.618 11:21:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:01.618 Delay0 00:09:01.618 11:21:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.618 11:21:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:01.618 11:21:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.618 11:21:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:01.618 11:21:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.618 11:21:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=64698 00:09:01.618 11:21:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:09:01.618 11:21:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:09:01.876 [2024-11-20 11:21:47.256469] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
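The target configuration traced above is issued through rpc_cmd, which forwards each call to scripts/rpc.py over the target's default /var/tmp/spdk.sock socket. A condensed, roughly equivalent sequence (paths and socket are the defaults used by this run; the real harness keeps one persistent rpc.py session instead of one process per call):

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    $RPC nvmf_create_transport -t tcp -o -u 8192
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    $RPC bdev_null_create NULL1 1000 512              # 1000 MiB null bdev, 512-byte blocks
    $RPC bdev_delay_create -b NULL1 -d Delay0 \
        -r 1000000 -t 1000000 -w 1000000 -n 1000000   # roughly 1 s of artificial latency per I/O
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0

    # initiator-side load from the root namespace (cores 2-3), then the subsystem
    # is deleted out from under it while requests are still queued in Delay0:
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' \
        -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
    sleep 2
    $RPC nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1

The long runs of "Read/Write completed with error (sct=0, sc=8)" and "starting I/O failed: -6" that follow are the expected outcome rather than a test failure: with up to 128 commands per core parked in the 1 s delay bdev, deleting the subsystem forces the target to abort the in-flight I/O and drop the queue pairs, and spdk_nvme_perf reports every aborted command before printing its latency summary.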
00:09:03.780 11:21:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:03.780 11:21:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.780 11:21:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:03.780 Read completed with error (sct=0, sc=8) 00:09:03.780 Read completed with error (sct=0, sc=8) 00:09:03.780 starting I/O failed: -6 00:09:03.780 Write completed with error (sct=0, sc=8) 00:09:03.780 Read completed with error (sct=0, sc=8) 00:09:03.780 Read completed with error (sct=0, sc=8) 00:09:03.780 Write completed with error (sct=0, sc=8) 00:09:03.780 starting I/O failed: -6 00:09:03.780 Read completed with error (sct=0, sc=8) 00:09:03.780 Read completed with error (sct=0, sc=8) 00:09:03.780 Read completed with error (sct=0, sc=8) 00:09:03.780 Read completed with error (sct=0, sc=8) 00:09:03.780 starting I/O failed: -6 00:09:03.780 Read completed with error (sct=0, sc=8) 00:09:03.780 Read completed with error (sct=0, sc=8) 00:09:03.780 Read completed with error (sct=0, sc=8) 00:09:03.780 Read completed with error (sct=0, sc=8) 00:09:03.780 starting I/O failed: -6 00:09:03.780 Read completed with error (sct=0, sc=8) 00:09:03.780 Read completed with error (sct=0, sc=8) 00:09:03.780 Write completed with error (sct=0, sc=8) 00:09:03.780 Write completed with error (sct=0, sc=8) 00:09:03.780 starting I/O failed: -6 00:09:03.780 Write completed with error (sct=0, sc=8) 00:09:03.780 Read completed with error (sct=0, sc=8) 00:09:03.780 Write completed with error (sct=0, sc=8) 00:09:03.780 Write completed with error (sct=0, sc=8) 00:09:03.780 starting I/O failed: -6 00:09:03.780 Write completed with error (sct=0, sc=8) 00:09:03.780 Read completed with error (sct=0, sc=8) 00:09:03.780 Write completed with error (sct=0, sc=8) 00:09:03.780 Write completed with error (sct=0, sc=8) 00:09:03.780 starting I/O failed: -6 00:09:03.780 Read completed with error (sct=0, sc=8) 00:09:03.780 Read completed with error (sct=0, sc=8) 00:09:03.780 Read completed with error (sct=0, sc=8) 00:09:03.780 Read completed with error (sct=0, sc=8) 00:09:03.780 starting I/O failed: -6 00:09:03.780 Read completed with error (sct=0, sc=8) 00:09:03.780 Write completed with error (sct=0, sc=8) 00:09:03.780 Write completed with error (sct=0, sc=8) 00:09:03.780 Write completed with error (sct=0, sc=8) 00:09:03.780 starting I/O failed: -6 00:09:03.780 Read completed with error (sct=0, sc=8) 00:09:03.780 Read completed with error (sct=0, sc=8) 00:09:03.780 Read completed with error (sct=0, sc=8) 00:09:03.780 Write completed with error (sct=0, sc=8) 00:09:03.780 starting I/O failed: -6 00:09:03.780 Read completed with error (sct=0, sc=8) 00:09:03.780 Write completed with error (sct=0, sc=8) 00:09:03.780 Read completed with error (sct=0, sc=8) 00:09:03.780 Read completed with error (sct=0, sc=8) 00:09:03.780 starting I/O failed: -6 00:09:03.780 Write completed with error (sct=0, sc=8) 00:09:03.780 Write completed with error (sct=0, sc=8) 00:09:03.780 Read completed with error (sct=0, sc=8) 00:09:03.780 Read completed with error (sct=0, sc=8) 00:09:03.780 starting I/O failed: -6 00:09:03.780 [2024-11-20 11:21:49.300981] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f3338000c40 is same with the state(6) to be set 00:09:03.780 Write completed with error (sct=0, sc=8) 00:09:03.780 Write 
completed with error (sct=0, sc=8) 00:09:03.780 Read completed with error (sct=0, sc=8) 00:09:03.780 Read completed with error (sct=0, sc=8) 00:09:03.780 Read completed with error (sct=0, sc=8) 00:09:03.780 Write completed with error (sct=0, sc=8) 00:09:03.780 Read completed with error (sct=0, sc=8) 00:09:03.780 Write completed with error (sct=0, sc=8) 00:09:03.780 Write completed with error (sct=0, sc=8) 00:09:03.780 Read completed with error (sct=0, sc=8) 00:09:03.780 Read completed with error (sct=0, sc=8) 00:09:03.780 Read completed with error (sct=0, sc=8) 00:09:03.780 Write completed with error (sct=0, sc=8) 00:09:03.780 Write completed with error (sct=0, sc=8) 00:09:03.780 Read completed with error (sct=0, sc=8) 00:09:03.780 Write completed with error (sct=0, sc=8) 00:09:03.780 Read completed with error (sct=0, sc=8) 00:09:03.780 Read completed with error (sct=0, sc=8) 00:09:03.780 Read completed with error (sct=0, sc=8) 00:09:03.780 Read completed with error (sct=0, sc=8) 00:09:03.780 Write completed with error (sct=0, sc=8) 00:09:03.780 Write completed with error (sct=0, sc=8) 00:09:03.780 Read completed with error (sct=0, sc=8) 00:09:03.780 Write completed with error (sct=0, sc=8) 00:09:03.780 Read completed with error (sct=0, sc=8) 00:09:03.780 Write completed with error (sct=0, sc=8) 00:09:03.780 Read completed with error (sct=0, sc=8) 00:09:03.780 Write completed with error (sct=0, sc=8) 00:09:03.780 Read completed with error (sct=0, sc=8) 00:09:03.780 Write completed with error (sct=0, sc=8) 00:09:03.780 Read completed with error (sct=0, sc=8) 00:09:03.780 Read completed with error (sct=0, sc=8) 00:09:03.780 Read completed with error (sct=0, sc=8) 00:09:03.780 Write completed with error (sct=0, sc=8) 00:09:03.780 Write completed with error (sct=0, sc=8) 00:09:03.780 Read completed with error (sct=0, sc=8) 00:09:03.780 Write completed with error (sct=0, sc=8) 00:09:03.780 Read completed with error (sct=0, sc=8) 00:09:03.780 Read completed with error (sct=0, sc=8) 00:09:03.780 Read completed with error (sct=0, sc=8) 00:09:03.780 Write completed with error (sct=0, sc=8) 00:09:03.780 Read completed with error (sct=0, sc=8) 00:09:03.780 Read completed with error (sct=0, sc=8) 00:09:03.780 Read completed with error (sct=0, sc=8) 00:09:03.780 Read completed with error (sct=0, sc=8) 00:09:03.780 Read completed with error (sct=0, sc=8) 00:09:03.780 Write completed with error (sct=0, sc=8) 00:09:03.780 Read completed with error (sct=0, sc=8) 00:09:03.780 Read completed with error (sct=0, sc=8) 00:09:03.780 Write completed with error (sct=0, sc=8) 00:09:03.780 Write completed with error (sct=0, sc=8) 00:09:03.780 Read completed with error (sct=0, sc=8) 00:09:03.780 Read completed with error (sct=0, sc=8) 00:09:03.780 Read completed with error (sct=0, sc=8) 00:09:03.780 Write completed with error (sct=0, sc=8) 00:09:03.780 Write completed with error (sct=0, sc=8) 00:09:03.780 Read completed with error (sct=0, sc=8) 00:09:03.780 [2024-11-20 11:21:49.302178] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f333800d350 is same with the state(6) to be set 00:09:03.780 Write completed with error (sct=0, sc=8) 00:09:03.780 Read completed with error (sct=0, sc=8) 00:09:03.780 starting I/O failed: -6 00:09:03.780 Write completed with error (sct=0, sc=8) 00:09:03.780 Read completed with error (sct=0, sc=8) 00:09:03.780 Read completed with error (sct=0, sc=8) 00:09:03.780 Read completed with error (sct=0, sc=8) 00:09:03.780 starting I/O failed: -6 00:09:03.780 
Read completed with error (sct=0, sc=8) 00:09:03.780 Read completed with error (sct=0, sc=8) 00:09:03.780 Write completed with error (sct=0, sc=8) 00:09:03.780 Read completed with error (sct=0, sc=8) 00:09:03.780 starting I/O failed: -6 00:09:03.780 Read completed with error (sct=0, sc=8) 00:09:03.780 Write completed with error (sct=0, sc=8) 00:09:03.780 Read completed with error (sct=0, sc=8) 00:09:03.780 Write completed with error (sct=0, sc=8) 00:09:03.780 starting I/O failed: -6 00:09:03.780 Write completed with error (sct=0, sc=8) 00:09:03.780 Read completed with error (sct=0, sc=8) 00:09:03.780 Write completed with error (sct=0, sc=8) 00:09:03.780 Read completed with error (sct=0, sc=8) 00:09:03.780 starting I/O failed: -6 00:09:03.780 Read completed with error (sct=0, sc=8) 00:09:03.780 Write completed with error (sct=0, sc=8) 00:09:03.780 Read completed with error (sct=0, sc=8) 00:09:03.780 Read completed with error (sct=0, sc=8) 00:09:03.780 starting I/O failed: -6 00:09:03.780 Read completed with error (sct=0, sc=8) 00:09:03.780 Read completed with error (sct=0, sc=8) 00:09:03.780 Read completed with error (sct=0, sc=8) 00:09:03.780 Read completed with error (sct=0, sc=8) 00:09:03.780 starting I/O failed: -6 00:09:03.780 Read completed with error (sct=0, sc=8) 00:09:03.780 Read completed with error (sct=0, sc=8) 00:09:03.780 Write completed with error (sct=0, sc=8) 00:09:03.780 Read completed with error (sct=0, sc=8) 00:09:03.780 starting I/O failed: -6 00:09:03.780 Read completed with error (sct=0, sc=8) 00:09:03.780 Read completed with error (sct=0, sc=8) 00:09:03.780 Write completed with error (sct=0, sc=8) 00:09:03.780 Write completed with error (sct=0, sc=8) 00:09:03.780 starting I/O failed: -6 00:09:03.780 Read completed with error (sct=0, sc=8) 00:09:03.780 Read completed with error (sct=0, sc=8) 00:09:03.780 Write completed with error (sct=0, sc=8) 00:09:03.780 Read completed with error (sct=0, sc=8) 00:09:03.780 starting I/O failed: -6 00:09:03.780 Write completed with error (sct=0, sc=8) 00:09:03.780 Write completed with error (sct=0, sc=8) 00:09:03.780 Read completed with error (sct=0, sc=8) 00:09:03.780 Read completed with error (sct=0, sc=8) 00:09:03.780 starting I/O failed: -6 00:09:03.780 Read completed with error (sct=0, sc=8) 00:09:03.780 [2024-11-20 11:21:49.303316] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x623a50 is same with the state(6) to be set 00:09:03.780 Read completed with error (sct=0, sc=8) 00:09:03.781 Read completed with error (sct=0, sc=8) 00:09:03.781 Read completed with error (sct=0, sc=8) 00:09:03.781 Read completed with error (sct=0, sc=8) 00:09:03.781 Read completed with error (sct=0, sc=8) 00:09:03.781 Read completed with error (sct=0, sc=8) 00:09:03.781 Read completed with error (sct=0, sc=8) 00:09:03.781 Read completed with error (sct=0, sc=8) 00:09:03.781 Read completed with error (sct=0, sc=8) 00:09:03.781 Read completed with error (sct=0, sc=8) 00:09:03.781 Write completed with error (sct=0, sc=8) 00:09:03.781 Read completed with error (sct=0, sc=8) 00:09:03.781 Read completed with error (sct=0, sc=8) 00:09:03.781 Read completed with error (sct=0, sc=8) 00:09:03.781 Read completed with error (sct=0, sc=8) 00:09:03.781 Read completed with error (sct=0, sc=8) 00:09:03.781 Read completed with error (sct=0, sc=8) 00:09:03.781 Read completed with error (sct=0, sc=8) 00:09:03.781 Read completed with error (sct=0, sc=8) 00:09:03.781 Write completed with error (sct=0, sc=8) 00:09:03.781 Read completed 
with error (sct=0, sc=8) 00:09:03.781 Read completed with error (sct=0, sc=8) 00:09:03.781 Read completed with error (sct=0, sc=8) 00:09:03.781 Read completed with error (sct=0, sc=8) 00:09:03.781 Read completed with error (sct=0, sc=8) 00:09:03.781 Read completed with error (sct=0, sc=8) 00:09:03.781 Write completed with error (sct=0, sc=8) 00:09:03.781 Read completed with error (sct=0, sc=8) 00:09:03.781 Read completed with error (sct=0, sc=8) 00:09:03.781 Write completed with error (sct=0, sc=8) 00:09:03.781 Read completed with error (sct=0, sc=8) 00:09:03.781 Write completed with error (sct=0, sc=8) 00:09:03.781 Read completed with error (sct=0, sc=8) 00:09:03.781 Write completed with error (sct=0, sc=8) 00:09:03.781 Write completed with error (sct=0, sc=8) 00:09:03.781 Read completed with error (sct=0, sc=8) 00:09:03.781 Read completed with error (sct=0, sc=8) 00:09:03.781 Read completed with error (sct=0, sc=8) 00:09:03.781 Read completed with error (sct=0, sc=8) 00:09:03.781 Write completed with error (sct=0, sc=8) 00:09:03.781 Read completed with error (sct=0, sc=8) 00:09:03.781 Read completed with error (sct=0, sc=8) 00:09:03.781 Read completed with error (sct=0, sc=8) 00:09:03.781 Read completed with error (sct=0, sc=8) 00:09:03.781 Read completed with error (sct=0, sc=8) 00:09:03.781 Write completed with error (sct=0, sc=8) 00:09:03.781 Read completed with error (sct=0, sc=8) 00:09:03.781 Read completed with error (sct=0, sc=8) 00:09:03.781 Read completed with error (sct=0, sc=8) 00:09:03.781 Read completed with error (sct=0, sc=8) 00:09:03.781 Read completed with error (sct=0, sc=8) 00:09:03.781 Write completed with error (sct=0, sc=8) 00:09:03.781 Write completed with error (sct=0, sc=8) 00:09:04.717 [2024-11-20 11:21:50.274491] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61fee0 is same with the state(6) to be set 00:09:04.717 Read completed with error (sct=0, sc=8) 00:09:04.717 Read completed with error (sct=0, sc=8) 00:09:04.717 Read completed with error (sct=0, sc=8) 00:09:04.717 Read completed with error (sct=0, sc=8) 00:09:04.717 Read completed with error (sct=0, sc=8) 00:09:04.717 Write completed with error (sct=0, sc=8) 00:09:04.718 Write completed with error (sct=0, sc=8) 00:09:04.718 Read completed with error (sct=0, sc=8) 00:09:04.718 Read completed with error (sct=0, sc=8) 00:09:04.718 Read completed with error (sct=0, sc=8) 00:09:04.718 Read completed with error (sct=0, sc=8) 00:09:04.718 Read completed with error (sct=0, sc=8) 00:09:04.718 Write completed with error (sct=0, sc=8) 00:09:04.718 Write completed with error (sct=0, sc=8) 00:09:04.718 Write completed with error (sct=0, sc=8) 00:09:04.718 Write completed with error (sct=0, sc=8) 00:09:04.718 Read completed with error (sct=0, sc=8) 00:09:04.718 Write completed with error (sct=0, sc=8) 00:09:04.718 Write completed with error (sct=0, sc=8) 00:09:04.718 Read completed with error (sct=0, sc=8) 00:09:04.718 Write completed with error (sct=0, sc=8) 00:09:04.718 Read completed with error (sct=0, sc=8) 00:09:04.718 Write completed with error (sct=0, sc=8) 00:09:04.718 Read completed with error (sct=0, sc=8) 00:09:04.718 Write completed with error (sct=0, sc=8) 00:09:04.718 Read completed with error (sct=0, sc=8) 00:09:04.718 Read completed with error (sct=0, sc=8) 00:09:04.718 [2024-11-20 11:21:50.301075] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6247e0 is same with the state(6) to be set 00:09:04.718 Read completed with error (sct=0, 
sc=8) 00:09:04.718 Read completed with error (sct=0, sc=8) 00:09:04.718 Read completed with error (sct=0, sc=8) 00:09:04.718 Read completed with error (sct=0, sc=8) 00:09:04.718 Read completed with error (sct=0, sc=8) 00:09:04.718 Write completed with error (sct=0, sc=8) 00:09:04.718 Read completed with error (sct=0, sc=8) 00:09:04.718 Read completed with error (sct=0, sc=8) 00:09:04.718 Read completed with error (sct=0, sc=8) 00:09:04.718 Read completed with error (sct=0, sc=8) 00:09:04.718 Write completed with error (sct=0, sc=8) 00:09:04.718 Write completed with error (sct=0, sc=8) 00:09:04.718 Read completed with error (sct=0, sc=8) 00:09:04.718 Read completed with error (sct=0, sc=8) 00:09:04.718 Read completed with error (sct=0, sc=8) 00:09:04.718 Write completed with error (sct=0, sc=8) 00:09:04.718 Read completed with error (sct=0, sc=8) 00:09:04.718 Read completed with error (sct=0, sc=8) 00:09:04.718 Write completed with error (sct=0, sc=8) 00:09:04.718 Read completed with error (sct=0, sc=8) 00:09:04.718 Read completed with error (sct=0, sc=8) 00:09:04.718 Write completed with error (sct=0, sc=8) 00:09:04.718 Write completed with error (sct=0, sc=8) 00:09:04.718 Read completed with error (sct=0, sc=8) 00:09:04.718 Read completed with error (sct=0, sc=8) 00:09:04.718 Read completed with error (sct=0, sc=8) 00:09:04.718 Read completed with error (sct=0, sc=8) 00:09:04.718 Write completed with error (sct=0, sc=8) 00:09:04.718 Read completed with error (sct=0, sc=8) 00:09:04.718 Read completed with error (sct=0, sc=8) 00:09:04.718 Read completed with error (sct=0, sc=8) 00:09:04.718 Read completed with error (sct=0, sc=8) 00:09:04.718 Read completed with error (sct=0, sc=8) 00:09:04.718 [2024-11-20 11:21:50.301759] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x623c30 is same with the state(6) to be set 00:09:04.718 Read completed with error (sct=0, sc=8) 00:09:04.718 Read completed with error (sct=0, sc=8) 00:09:04.718 Read completed with error (sct=0, sc=8) 00:09:04.718 Read completed with error (sct=0, sc=8) 00:09:04.718 Read completed with error (sct=0, sc=8) 00:09:04.718 Read completed with error (sct=0, sc=8) 00:09:04.718 Write completed with error (sct=0, sc=8) 00:09:04.718 Read completed with error (sct=0, sc=8) 00:09:04.718 Write completed with error (sct=0, sc=8) 00:09:04.718 Write completed with error (sct=0, sc=8) 00:09:04.718 Read completed with error (sct=0, sc=8) 00:09:04.718 Read completed with error (sct=0, sc=8) 00:09:04.718 Read completed with error (sct=0, sc=8) 00:09:04.718 Read completed with error (sct=0, sc=8) 00:09:04.718 Write completed with error (sct=0, sc=8) 00:09:04.718 Read completed with error (sct=0, sc=8) 00:09:04.718 Read completed with error (sct=0, sc=8) 00:09:04.718 Write completed with error (sct=0, sc=8) 00:09:04.718 Read completed with error (sct=0, sc=8) 00:09:04.718 Read completed with error (sct=0, sc=8) 00:09:04.718 Read completed with error (sct=0, sc=8) 00:09:04.718 Read completed with error (sct=0, sc=8) 00:09:04.718 Read completed with error (sct=0, sc=8) 00:09:04.718 Read completed with error (sct=0, sc=8) 00:09:04.718 Read completed with error (sct=0, sc=8) 00:09:04.718 [2024-11-20 11:21:50.302141] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f333800d020 is same with the state(6) to be set 00:09:04.718 Read completed with error (sct=0, sc=8) 00:09:04.718 Read completed with error (sct=0, sc=8) 00:09:04.718 Read completed with error (sct=0, sc=8) 00:09:04.718 
Read completed with error (sct=0, sc=8) 00:09:04.718 Read completed with error (sct=0, sc=8) 00:09:04.718 Read completed with error (sct=0, sc=8) 00:09:04.718 Read completed with error (sct=0, sc=8) 00:09:04.718 Read completed with error (sct=0, sc=8) 00:09:04.718 Read completed with error (sct=0, sc=8) 00:09:04.718 Read completed with error (sct=0, sc=8) 00:09:04.718 Write completed with error (sct=0, sc=8) 00:09:04.718 Write completed with error (sct=0, sc=8) 00:09:04.718 Read completed with error (sct=0, sc=8) 00:09:04.718 Read completed with error (sct=0, sc=8) 00:09:04.718 Read completed with error (sct=0, sc=8) 00:09:04.718 Write completed with error (sct=0, sc=8) 00:09:04.718 Write completed with error (sct=0, sc=8) 00:09:04.718 Read completed with error (sct=0, sc=8) 00:09:04.718 Read completed with error (sct=0, sc=8) 00:09:04.718 Read completed with error (sct=0, sc=8) 00:09:04.718 Write completed with error (sct=0, sc=8) 00:09:04.718 Read completed with error (sct=0, sc=8) 00:09:04.718 Read completed with error (sct=0, sc=8) 00:09:04.718 Write completed with error (sct=0, sc=8) 00:09:04.718 Read completed with error (sct=0, sc=8) 00:09:04.718 [2024-11-20 11:21:50.302571] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f333800d680 is same with the state(6) to be set 00:09:04.718 Initializing NVMe Controllers 00:09:04.718 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:09:04.718 Controller IO queue size 128, less than required. 00:09:04.718 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:09:04.718 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:09:04.718 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:09:04.718 Initialization complete. Launching workers. 
00:09:04.718 ======================================================== 00:09:04.718 Latency(us) 00:09:04.718 Device Information : IOPS MiB/s Average min max 00:09:04.718 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 164.72 0.08 957720.45 823.62 2001672.11 00:09:04.718 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 171.17 0.08 892470.44 805.91 1012957.41 00:09:04.718 ======================================================== 00:09:04.718 Total : 335.88 0.16 924468.97 805.91 2001672.11 00:09:04.718 00:09:04.718 [2024-11-20 11:21:50.303521] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61fee0 (9): Bad file descriptor 00:09:04.718 /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf: errors occurred 00:09:04.977 11:21:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.977 11:21:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:09:04.977 11:21:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 64698 00:09:04.977 11:21:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:09:05.235 11:21:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:09:05.235 11:21:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 64698 00:09:05.236 /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (64698) - No such process 00:09:05.236 11:21:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 64698 00:09:05.236 11:21:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0 00:09:05.236 11:21:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 64698 00:09:05.236 11:21:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait 00:09:05.236 11:21:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:05.236 11:21:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:09:05.236 11:21:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:05.236 11:21:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 64698 00:09:05.236 11:21:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:09:05.236 11:21:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:05.236 11:21:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:05.236 11:21:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:05.236 11:21:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:05.236 11:21:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.236 11:21:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
common/autotest_common.sh@10 -- # set +x 00:09:05.494 11:21:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.494 11:21:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:09:05.494 11:21:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.494 11:21:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:05.494 [2024-11-20 11:21:50.826404] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:09:05.494 11:21:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.494 11:21:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:05.494 11:21:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.494 11:21:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:05.494 11:21:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.494 11:21:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=64749 00:09:05.494 11:21:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:09:05.494 11:21:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:09:05.494 11:21:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 64749 00:09:05.494 11:21:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:05.494 [2024-11-20 11:21:51.018565] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
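For the second pass the subsystem is re-created with the same Delay0 namespace, a 3-second perf run is started (pid 64749), and delete_subsystem.sh polls the perf process instead of sleeping for a fixed time. A paraphrase of the loop implied by the delete_subsystem.sh@56-@60 entries above and below (the failure branch is an assumption about what the script does once the bound is exceeded):

    delay=0
    while kill -0 "$perf_pid"; do       # keep looping while spdk_nvme_perf is alive
        sleep 0.5
        if ((delay++ > 20)); then       # roughly 10 s upper bound on a 3 s workload
            echo "perf is still running, giving up"
            exit 1
        fi
    done

Once perf exits on its own, the kill -0 probe reports "No such process" (seen below), the loop ends, wait 64749 reaps the exit status, and nvmftestfini tears everything down: the nvme-tcp/nvme-fabrics/nvme-keyring modules are unloaded, the nvmf_tgt reactor (pid 64666) is killed, the SPDK_NVMF-tagged iptables rules are stripped back out via iptables-save | grep -v SPDK_NVMF | iptables-restore, and the veth/bridge/namespace topology is deleted before the suite moves on to nvmf_host_management.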
00:09:06.061 11:21:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:06.061 11:21:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 64749 00:09:06.061 11:21:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:06.319 11:21:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:06.319 11:21:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 64749 00:09:06.319 11:21:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:06.886 11:21:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:06.886 11:21:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 64749 00:09:06.886 11:21:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:07.453 11:21:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:07.453 11:21:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 64749 00:09:07.453 11:21:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:08.030 11:21:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:08.030 11:21:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 64749 00:09:08.030 11:21:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:08.289 11:21:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:08.289 11:21:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 64749 00:09:08.289 11:21:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:08.550 Initializing NVMe Controllers 00:09:08.550 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:09:08.550 Controller IO queue size 128, less than required. 00:09:08.550 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:09:08.550 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:09:08.550 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:09:08.550 Initialization complete. Launching workers. 
00:09:08.550 ======================================================== 00:09:08.550 Latency(us) 00:09:08.550 Device Information : IOPS MiB/s Average min max 00:09:08.550 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1003402.16 1000141.64 1012322.39 00:09:08.550 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1005543.06 1000167.20 1013581.24 00:09:08.550 ======================================================== 00:09:08.550 Total : 256.00 0.12 1004472.61 1000141.64 1013581.24 00:09:08.550 00:09:08.825 11:21:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:08.825 11:21:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 64749 00:09:08.825 /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (64749) - No such process 00:09:08.825 11:21:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 64749 00:09:08.825 11:21:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:09:08.825 11:21:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:09:08.825 11:21:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:08.825 11:21:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:09:09.083 11:21:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:09.083 11:21:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:09:09.083 11:21:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:09.083 11:21:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:09.083 rmmod nvme_tcp 00:09:09.083 rmmod nvme_fabrics 00:09:09.083 rmmod nvme_keyring 00:09:09.083 11:21:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:09.083 11:21:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:09:09.083 11:21:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:09:09.084 11:21:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 64666 ']' 00:09:09.084 11:21:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 64666 00:09:09.084 11:21:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 64666 ']' 00:09:09.084 11:21:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 64666 00:09:09.084 11:21:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:09:09.084 11:21:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:09.084 11:21:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64666 00:09:09.084 11:21:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:09.084 killing process with pid 64666 00:09:09.084 11:21:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' 
reactor_0 = sudo ']' 00:09:09.084 11:21:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64666' 00:09:09.084 11:21:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 64666 00:09:09.084 11:21:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 64666 00:09:09.084 11:21:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:09.084 11:21:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:09.084 11:21:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:09.084 11:21:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:09:09.084 11:21:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:09:09.084 11:21:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:09.084 11:21:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:09:09.084 11:21:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:09.084 11:21:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:09:09.084 11:21:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:09:09.343 11:21:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:09:09.343 11:21:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:09:09.343 11:21:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:09:09.343 11:21:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:09:09.343 11:21:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:09:09.343 11:21:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:09:09.343 11:21:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:09:09.343 11:21:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:09:09.343 11:21:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:09:09.343 11:21:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:09:09.343 11:21:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:09.343 11:21:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:09.343 11:21:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@246 -- # remove_spdk_ns 00:09:09.343 11:21:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:09.343 11:21:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 15> /dev/null' 00:09:09.343 11:21:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:09.343 11:21:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@300 -- # return 0 00:09:09.343 00:09:09.343 real 0m8.861s 00:09:09.343 user 0m27.153s 00:09:09.343 sys 0m1.547s 00:09:09.343 11:21:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:09.343 11:21:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:09.343 ************************************ 00:09:09.343 END TEST nvmf_delete_subsystem 00:09:09.343 ************************************ 00:09:09.603 11:21:54 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:09:09.603 11:21:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:09.603 11:21:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:09.603 11:21:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:09.603 ************************************ 00:09:09.603 START TEST nvmf_host_management 00:09:09.603 ************************************ 00:09:09.603 11:21:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:09:09.603 * Looking for test storage... 00:09:09.603 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:09.603 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:09.603 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # lcov --version 00:09:09.603 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:09.603 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:09.603 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:09.603 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:09.603 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:09.603 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:09:09.603 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:09:09.603 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:09:09.603 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:09:09.603 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:09:09.603 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:09:09.603 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:09:09.603 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:09.603 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:09:09.603 
11:21:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:09:09.603 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:09.603 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:09.603 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:09:09.603 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:09:09.603 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:09.603 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:09:09.603 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:09:09.603 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:09:09.603 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:09:09.603 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:09.603 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:09:09.603 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:09:09.603 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:09.603 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:09.603 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:09:09.603 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:09.603 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:09.603 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:09.603 --rc genhtml_branch_coverage=1 00:09:09.603 --rc genhtml_function_coverage=1 00:09:09.603 --rc genhtml_legend=1 00:09:09.603 --rc geninfo_all_blocks=1 00:09:09.603 --rc geninfo_unexecuted_blocks=1 00:09:09.603 00:09:09.603 ' 00:09:09.603 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:09.603 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:09.603 --rc genhtml_branch_coverage=1 00:09:09.603 --rc genhtml_function_coverage=1 00:09:09.603 --rc genhtml_legend=1 00:09:09.603 --rc geninfo_all_blocks=1 00:09:09.603 --rc geninfo_unexecuted_blocks=1 00:09:09.603 00:09:09.603 ' 00:09:09.603 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:09.603 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:09.603 --rc genhtml_branch_coverage=1 00:09:09.603 --rc genhtml_function_coverage=1 00:09:09.603 --rc genhtml_legend=1 00:09:09.603 --rc geninfo_all_blocks=1 00:09:09.603 --rc geninfo_unexecuted_blocks=1 00:09:09.603 00:09:09.603 ' 00:09:09.603 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:09.603 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:09.603 --rc genhtml_branch_coverage=1 00:09:09.603 --rc 
genhtml_function_coverage=1 00:09:09.603 --rc genhtml_legend=1 00:09:09.603 --rc geninfo_all_blocks=1 00:09:09.603 --rc geninfo_unexecuted_blocks=1 00:09:09.603 00:09:09.603 ' 00:09:09.603 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:09.603 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:09:09.603 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:09.603 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:09.603 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:09.603 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:09.603 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:09.603 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:09.603 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:09.603 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:09.603 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:09.603 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:09.603 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a 00:09:09.603 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=806d442c-08ba-4719-b76d-33169283459a 00:09:09.603 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:09.603 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:09.603 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:09.603 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:09.603 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:09.603 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:09:09.603 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:09.603 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:09.603 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:09.603 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:09.603 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:09.604 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:09.604 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:09:09.604 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:09.604 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:09:09.604 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:09.604 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:09.604 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:09.604 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:09.604 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:09.604 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:09:09.604 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:09.604 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:09.604 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:09.604 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:09.604 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:09.604 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:09.604 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:09:09.604 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:09.604 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:09.604 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:09.604 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:09.604 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:09.604 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:09.604 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:09.604 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:09.604 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:09:09.604 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:09:09.604 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:09:09.604 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:09:09.604 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:09:09.604 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@460 -- # nvmf_veth_init 00:09:09.604 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:09.604 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:09:09.604 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:09:09.604 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:09:09.604 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:09.604 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:09:09.604 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:09.604 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:09:09.604 11:21:55 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:09.604 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:09:09.604 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:09.604 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:09.604 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:09.604 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:09.604 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:09.604 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:09.604 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:09:09.604 Cannot find device "nvmf_init_br" 00:09:09.604 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@162 -- # true 00:09:09.604 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:09:09.604 Cannot find device "nvmf_init_br2" 00:09:09.604 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@163 -- # true 00:09:09.604 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:09:09.604 Cannot find device "nvmf_tgt_br" 00:09:09.604 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@164 -- # true 00:09:09.604 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:09:09.863 Cannot find device "nvmf_tgt_br2" 00:09:09.863 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@165 -- # true 00:09:09.863 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:09:09.863 Cannot find device "nvmf_init_br" 00:09:09.863 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@166 -- # true 00:09:09.863 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:09:09.863 Cannot find device "nvmf_init_br2" 00:09:09.863 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@167 -- # true 00:09:09.863 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:09:09.863 Cannot find device "nvmf_tgt_br" 00:09:09.863 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@168 -- # true 00:09:09.863 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:09:09.863 Cannot find device "nvmf_tgt_br2" 00:09:09.863 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@169 -- # true 00:09:09.863 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:09:09.863 Cannot find device "nvmf_br" 00:09:09.863 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@170 -- # true 00:09:09.863 11:21:55 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:09:09.863 Cannot find device "nvmf_init_if" 00:09:09.863 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@171 -- # true 00:09:09.863 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:09:09.863 Cannot find device "nvmf_init_if2" 00:09:09.863 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@172 -- # true 00:09:09.863 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:09.863 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:09.863 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@173 -- # true 00:09:09.863 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:09.863 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:09.863 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@174 -- # true 00:09:09.863 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:09:09.863 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:09.863 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:09:09.863 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:09.863 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:09.863 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:09.863 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:09.863 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:09.863 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:09:09.863 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:09:09.863 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:09:09.863 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:09:09.863 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:09:09.863 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:09:09.863 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:09:09.863 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:09:09.863 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:09:09.863 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:09.863 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:09.863 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:09.863 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:09:09.863 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:09:10.122 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:09:10.122 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:09:10.122 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:10.122 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:10.122 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:10.122 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:09:10.122 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:09:10.122 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:09:10.122 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:10.122 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:09:10.122 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:09:10.122 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:10.122 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.088 ms 00:09:10.122 00:09:10.122 --- 10.0.0.3 ping statistics --- 00:09:10.122 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:10.122 rtt min/avg/max/mdev = 0.088/0.088/0.088/0.000 ms 00:09:10.122 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:09:10.122 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:09:10.122 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.133 ms 00:09:10.122 00:09:10.122 --- 10.0.0.4 ping statistics --- 00:09:10.122 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:10.122 rtt min/avg/max/mdev = 0.133/0.133/0.133/0.000 ms 00:09:10.122 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:10.122 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:10.122 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:09:10.122 00:09:10.122 --- 10.0.0.1 ping statistics --- 00:09:10.122 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:10.122 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:09:10.122 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:09:10.122 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:10.122 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.069 ms 00:09:10.122 00:09:10.122 --- 10.0.0.2 ping statistics --- 00:09:10.122 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:10.122 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:09:10.122 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:10.122 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@461 -- # return 0 00:09:10.123 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:10.123 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:10.123 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:10.123 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:10.123 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:10.123 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:10.123 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:10.123 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:09:10.123 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:09:10.123 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:09:10.123 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:10.123 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:10.123 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:10.123 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=65026 00:09:10.123 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 65026 00:09:10.123 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:09:10.123 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # 
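Note on the nvmf_veth_init sequence traced above: the rig builds two initiator veth pairs on the host (10.0.0.1 and 10.0.0.2) and two target veth pairs inside the nvmf_tgt_ns_spdk namespace (10.0.0.3 and 10.0.0.4), bridges all four peer ends over nvmf_br, opens TCP port 4420 with SPDK-tagged iptables rules, and pings each address to confirm the wiring. A condensed sketch of the same topology (one initiator/target pair shown; the second pair is identical with the *2 names and the .2/.4 addresses):

  ip netns add nvmf_tgt_ns_spdk

  # Initiator end stays on the host; the target end moves into the namespace.
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if

  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br  up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up

  # Both peer ends hang off one bridge so initiator and target can reach each other.
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br

  # Allow the NVMe/TCP port and bridge forwarding, tagged so teardown can find the rules.
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment SPDK_NVMF
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment SPDK_NVMF

  ping -c 1 10.0.0.3    # host -> target namespace connectivity check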
'[' -z 65026 ']' 00:09:10.123 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:10.123 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:10.123 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:10.123 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:10.123 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:10.123 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:10.123 [2024-11-20 11:21:55.652704] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 00:09:10.123 [2024-11-20 11:21:55.652802] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:10.381 [2024-11-20 11:21:55.806000] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:10.381 [2024-11-20 11:21:55.845163] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:10.381 [2024-11-20 11:21:55.845217] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:10.381 [2024-11-20 11:21:55.845231] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:10.381 [2024-11-20 11:21:55.845241] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:10.381 [2024-11-20 11:21:55.845249] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:09:10.381 [2024-11-20 11:21:55.847662] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:10.381 [2024-11-20 11:21:55.847830] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:10.381 [2024-11-20 11:21:55.848551] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:09:10.381 [2024-11-20 11:21:55.848599] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:10.381 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:10.381 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:09:10.381 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:10.381 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:10.381 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:10.638 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:10.638 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:10.638 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.638 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:10.638 [2024-11-20 11:21:55.980048] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:10.638 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.638 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:09:10.638 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:10.638 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:10.638 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:09:10.638 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:09:10.638 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:09:10.638 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.638 11:21:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:10.638 Malloc0 00:09:10.638 [2024-11-20 11:21:56.049960] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:09:10.638 11:21:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.638 11:21:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:09:10.638 11:21:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:10.638 11:21:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:10.638 11:21:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
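Note on the target bring-up traced above: nvmfappstart launches nvmf_tgt inside the namespace with -m 0x1E (reactors on cores 1-4), and host_management.sh then feeds a short rpcs.txt through rpc_cmd, producing the TCP transport, a Malloc0 bdev, and a subsystem listening on 10.0.0.3 port 4420. The exact rpcs.txt contents are not echoed in this log; a plausible equivalent using scripts/rpc.py, with the subsystem and host NQNs taken from the bdevperf configuration printed further below, would be:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  # Transport options mirror NVMF_TRANSPORT_OPTS='-t tcp -o' plus the -u 8192 shown in the trace.
  $rpc nvmf_create_transport -t tcp -o -u 8192

  # 64 MiB / 512 B block Malloc bdev, per MALLOC_BDEV_SIZE and MALLOC_BLOCK_SIZE above.
  $rpc bdev_malloc_create -b Malloc0 64 512

  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -s SPDKISFASTANDAWESOME
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420
  $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0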
target/host_management.sh@73 -- # perfpid=65090 00:09:10.638 11:21:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 65090 /var/tmp/bdevperf.sock 00:09:10.638 11:21:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 65090 ']' 00:09:10.638 11:21:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:10.639 11:21:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:10.639 11:21:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:10.639 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:10.639 11:21:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:09:10.639 11:21:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:10.639 11:21:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:10.639 11:21:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:09:10.639 11:21:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:09:10.639 11:21:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:09:10.639 11:21:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:10.639 11:21:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:10.639 { 00:09:10.639 "params": { 00:09:10.639 "name": "Nvme$subsystem", 00:09:10.639 "trtype": "$TEST_TRANSPORT", 00:09:10.639 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:10.639 "adrfam": "ipv4", 00:09:10.639 "trsvcid": "$NVMF_PORT", 00:09:10.639 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:10.639 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:10.639 "hdgst": ${hdgst:-false}, 00:09:10.639 "ddgst": ${ddgst:-false} 00:09:10.639 }, 00:09:10.639 "method": "bdev_nvme_attach_controller" 00:09:10.639 } 00:09:10.639 EOF 00:09:10.639 )") 00:09:10.639 11:21:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:09:10.639 11:21:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:09:10.639 11:21:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:09:10.639 11:21:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:10.639 "params": { 00:09:10.639 "name": "Nvme0", 00:09:10.639 "trtype": "tcp", 00:09:10.639 "traddr": "10.0.0.3", 00:09:10.639 "adrfam": "ipv4", 00:09:10.639 "trsvcid": "4420", 00:09:10.639 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:10.639 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:09:10.639 "hdgst": false, 00:09:10.639 "ddgst": false 00:09:10.639 }, 00:09:10.639 "method": "bdev_nvme_attach_controller" 00:09:10.639 }' 00:09:10.639 [2024-11-20 11:21:56.175948] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 
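Note on the bdevperf launch traced above: the initiator side is bdevperf driven over /var/tmp/bdevperf.sock with a queue depth of 64, 64 KiB I/O, a verify workload, and a 10 second run, and its bdev configuration arrives on /dev/fd/63, i.e. the JSON printed above is passed through shell process substitution rather than a file on disk. A sketch of the same invocation, assuming gen_nvmf_target_json emits that JSON:

  bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf

  # <(...) expands to /dev/fd/63, which is how the printed JSON reaches --json.
  $bdevperf -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json 0) \
      -q 64 -o 65536 -w verify -t 10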
00:09:10.639 [2024-11-20 11:21:56.176088] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65090 ] 00:09:10.897 [2024-11-20 11:21:56.336958] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:10.897 [2024-11-20 11:21:56.371711] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:11.155 Running I/O for 10 seconds... 00:09:11.155 11:21:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:11.155 11:21:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:09:11.155 11:21:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:09:11.155 11:21:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.155 11:21:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:11.155 11:21:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.155 11:21:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:11.155 11:21:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:09:11.155 11:21:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:09:11.155 11:21:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:09:11.155 11:21:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:09:11.155 11:21:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:09:11.155 11:21:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:09:11.155 11:21:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:09:11.155 11:21:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:09:11.155 11:21:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:09:11.155 11:21:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.155 11:21:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:11.155 11:21:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.155 11:21:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=67 00:09:11.155 11:21:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']' 00:09:11.155 11:21:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:09:11.415 11:21:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 
00:09:11.415 11:21:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:09:11.415 11:21:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:09:11.415 11:21:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:09:11.415 11:21:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.415 11:21:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:11.415 11:21:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.415 11:21:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=554 00:09:11.415 11:21:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 554 -ge 100 ']' 00:09:11.415 11:21:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:09:11.415 11:21:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:09:11.415 11:21:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:09:11.415 11:21:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:09:11.415 11:21:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.415 11:21:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:11.415 [2024-11-20 11:21:56.967104] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:09:11.415 [2024-11-20 11:21:56.967162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:11.415 [2024-11-20 11:21:56.967177] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:09:11.415 [2024-11-20 11:21:56.967187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:11.415 [2024-11-20 11:21:56.967197] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:09:11.415 [2024-11-20 11:21:56.967206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:11.415 [2024-11-20 11:21:56.967216] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:09:11.415 [2024-11-20 11:21:56.967225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:11.415 [2024-11-20 11:21:56.967235] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeec660 is same with the state(6) to be set 00:09:11.415 [2024-11-20 11:21:56.967686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:11.415 
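Note on the waitforio loop traced above: before removing the host from the subsystem, the test polls bdevperf's per-bdev read counter until at least 100 reads have completed (67 on the first pass here, 554 on the second), retrying up to 10 times with a 0.25 s sleep. A minimal sketch of that polling pattern, assuming rpc_cmd wraps scripts/rpc.py against the bdevperf socket:

  rpc_cmd() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock "$@"; }

  ret=1
  for ((i = 10; i != 0; i--)); do
      reads=$(rpc_cmd bdev_get_iostat -b Nvme0n1 | jq -r '.bdevs[0].num_read_ops')
      if [ "$reads" -ge 100 ]; then
          ret=0
          break
      fi
      sleep 0.25
  done
  [ "$ret" -eq 0 ]   # fail if I/O never started flowing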
[2024-11-20 11:21:56.967705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:11.415 [2024-11-20 11:21:56.967726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:82048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:11.415 [2024-11-20 11:21:56.967737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:11.415 [2024-11-20 11:21:56.967749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:82176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:11.415 [2024-11-20 11:21:56.967759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:11.415 [2024-11-20 11:21:56.967770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:82304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:11.415 [2024-11-20 11:21:56.967779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:11.415 [2024-11-20 11:21:56.967791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:82432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:11.415 [2024-11-20 11:21:56.967800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:11.415 [2024-11-20 11:21:56.967812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:82560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:11.415 [2024-11-20 11:21:56.967821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:11.415 [2024-11-20 11:21:56.967832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:82688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:11.415 [2024-11-20 11:21:56.967841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:11.415 [2024-11-20 11:21:56.967852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:82816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:11.415 [2024-11-20 11:21:56.967861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:11.415 [2024-11-20 11:21:56.967873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:82944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:11.415 [2024-11-20 11:21:56.967884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:11.415 [2024-11-20 11:21:56.967895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:83072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:11.416 [2024-11-20 11:21:56.967910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:11.416 [2024-11-20 11:21:56.967922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:83200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:11.416 [2024-11-20 
11:21:56.967932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:11.416 [2024-11-20 11:21:56.967943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:83328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:11.416 [2024-11-20 11:21:56.967952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:11.416 [2024-11-20 11:21:56.967963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:83456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:11.416 [2024-11-20 11:21:56.967973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:11.416 [2024-11-20 11:21:56.967984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:83584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:11.416 [2024-11-20 11:21:56.967993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:11.416 [2024-11-20 11:21:56.968005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:83712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:11.416 [2024-11-20 11:21:56.968014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:11.416 [2024-11-20 11:21:56.968026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:83840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:11.416 [2024-11-20 11:21:56.968035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:11.416 [2024-11-20 11:21:56.968047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:83968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:11.416 [2024-11-20 11:21:56.968056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:11.416 [2024-11-20 11:21:56.968067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:84096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:11.416 [2024-11-20 11:21:56.968076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:11.416 [2024-11-20 11:21:56.968087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:84224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:11.416 [2024-11-20 11:21:56.968097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:11.416 [2024-11-20 11:21:56.968108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:84352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:11.416 [2024-11-20 11:21:56.968118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:11.416 [2024-11-20 11:21:56.968133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:84480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:11.416 [2024-11-20 
11:21:56.968143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:11.416 [2024-11-20 11:21:56.968154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:84608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:11.416 [2024-11-20 11:21:56.968163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:11.416 [2024-11-20 11:21:56.968175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:84736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:11.416 [2024-11-20 11:21:56.968184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:11.416 [2024-11-20 11:21:56.968196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:84864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:11.416 [2024-11-20 11:21:56.968205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:11.416 [2024-11-20 11:21:56.968216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:84992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:11.416 [2024-11-20 11:21:56.968225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:11.416 [2024-11-20 11:21:56.968237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:85120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:11.416 [2024-11-20 11:21:56.968248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:11.416 [2024-11-20 11:21:56.968260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:85248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:11.416 [2024-11-20 11:21:56.968269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:11.416 [2024-11-20 11:21:56.968280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:85376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:11.416 [2024-11-20 11:21:56.968290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:11.416 [2024-11-20 11:21:56.968301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:85504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:11.416 [2024-11-20 11:21:56.968310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:11.416 [2024-11-20 11:21:56.968321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:85632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:11.416 [2024-11-20 11:21:56.968331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:11.416 [2024-11-20 11:21:56.968342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:85760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:11.416 [2024-11-20 
11:21:56.968352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:11.416 [2024-11-20 11:21:56.968363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:85888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:11.416 [2024-11-20 11:21:56.968372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:11.416 [2024-11-20 11:21:56.968384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:86016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:11.416 [2024-11-20 11:21:56.968393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:11.416 [2024-11-20 11:21:56.968405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:86144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:11.416 [2024-11-20 11:21:56.968415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:11.416 [2024-11-20 11:21:56.968426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:86272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:11.416 [2024-11-20 11:21:56.968436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:11.416 [2024-11-20 11:21:56.968447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:86400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:11.416 [2024-11-20 11:21:56.968456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:11.416 [2024-11-20 11:21:56.968467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:86528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:11.416 [2024-11-20 11:21:56.968476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:11.416 [2024-11-20 11:21:56.968488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:86656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:11.416 [2024-11-20 11:21:56.968497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:11.416 [2024-11-20 11:21:56.968508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:86784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:11.416 [2024-11-20 11:21:56.968518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:11.416 [2024-11-20 11:21:56.968529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:86912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:11.416 [2024-11-20 11:21:56.968538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:11.416 [2024-11-20 11:21:56.968550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:87040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:11.416 [2024-11-20 
11:21:56.968559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:11.416 [2024-11-20 11:21:56.968570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:87168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:11.416 [2024-11-20 11:21:56.968581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:11.416 [2024-11-20 11:21:56.968593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:87296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:11.416 [2024-11-20 11:21:56.968602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:11.416 [2024-11-20 11:21:56.968625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:87424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:11.416 [2024-11-20 11:21:56.968636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:11.416 [2024-11-20 11:21:56.968647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:87552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:11.416 [2024-11-20 11:21:56.968657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:11.416 [2024-11-20 11:21:56.968668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:87680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:11.417 [2024-11-20 11:21:56.968678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:11.417 [2024-11-20 11:21:56.968690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:87808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:11.417 [2024-11-20 11:21:56.968699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:11.417 [2024-11-20 11:21:56.968710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:87936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:11.417 [2024-11-20 11:21:56.968720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:11.417 [2024-11-20 11:21:56.968732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:88064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:11.417 [2024-11-20 11:21:56.968741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:11.417 [2024-11-20 11:21:56.968752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:88192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:11.417 [2024-11-20 11:21:56.968761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:11.417 [2024-11-20 11:21:56.968773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:88320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:11.417 [2024-11-20 
11:21:56.968782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:11.417 [2024-11-20 11:21:56.968794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:88448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:11.417 [2024-11-20 11:21:56.968803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:11.417 [2024-11-20 11:21:56.968814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:88576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:11.417 [2024-11-20 11:21:56.968824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:11.417 [2024-11-20 11:21:56.968835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:88704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:11.417 [2024-11-20 11:21:56.968844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:11.417 [2024-11-20 11:21:56.968855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:88832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:11.417 [2024-11-20 11:21:56.968864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:11.417 [2024-11-20 11:21:56.968875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:88960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:11.417 [2024-11-20 11:21:56.968885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:11.417 [2024-11-20 11:21:56.968896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:89088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:11.417 [2024-11-20 11:21:56.968905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:11.417 [2024-11-20 11:21:56.968917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:89216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:11.417 [2024-11-20 11:21:56.968929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:11.417 [2024-11-20 11:21:56.968940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:89344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:11.417 [2024-11-20 11:21:56.968950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:11.417 [2024-11-20 11:21:56.968961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:89472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:11.417 [2024-11-20 11:21:56.968970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:11.417 [2024-11-20 11:21:56.968981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:89600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:11.417 [2024-11-20 
11:21:56.968990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:11.417 [2024-11-20 11:21:56.969001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:89728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:11.417 [2024-11-20 11:21:56.969011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:11.417 [2024-11-20 11:21:56.969022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:89856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:11.417 [2024-11-20 11:21:56.969031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:11.417 [2024-11-20 11:21:56.969043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:89984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:11.417 [2024-11-20 11:21:56.969052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:11.417 [2024-11-20 11:21:56.969083] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:09:11.417 [2024-11-20 11:21:56.970288] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:09:11.417 task offset: 81920 on job bdev=Nvme0n1 fails 00:09:11.417 00:09:11.417 Latency(us) 00:09:11.417 [2024-11-20T11:21:57.006Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:11.417 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:09:11.417 Job: Nvme0n1 ended in about 0.46 seconds with error 00:09:11.417 Verification LBA range: start 0x0 length 0x400 00:09:11.417 Nvme0n1 : 0.46 1404.75 87.80 140.48 0.00 40047.92 2442.71 38606.66 00:09:11.417 [2024-11-20T11:21:57.006Z] =================================================================================================================== 00:09:11.417 [2024-11-20T11:21:57.006Z] Total : 1404.75 87.80 140.48 0.00 40047.92 2442.71 38606.66 00:09:11.417 11:21:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.417 11:21:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:09:11.417 11:21:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.417 11:21:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:11.417 [2024-11-20 11:21:56.972438] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:11.417 [2024-11-20 11:21:56.972464] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xeec660 (9): Bad file descriptor 00:09:11.417 11:21:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.417 11:21:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:09:11.417 [2024-11-20 11:21:56.982652] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 
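The wall of ABORTED - SQ DELETION (00/08) completions above is the in-flight WRITE queue being failed back when the target drops this initiator's queue pair; host_management.sh@85 then re-adds the host NQN and the initiator's automatic reset ("Resetting controller successful") reconnects. A minimal sketch of that allow-list toggle, assuming the standard scripts/rpc.py commands; the corresponding remove call happens earlier in the test and is not visible in this excerpt:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
subsys=nqn.2016-06.io.spdk:cnode0
host=nqn.2016-06.io.spdk:host0
# Revoking the host while bdevperf is writing deletes its submission queues;
# queued WRITEs complete as ABORTED - SQ DELETION and the job fails.
"$rpc" nvmf_subsystem_remove_host "$subsys" "$host"
# Re-allowing the host lets the initiator's reset/reconnect succeed.
"$rpc" nvmf_subsystem_add_host "$subsys" "$host"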
00:09:12.792 11:21:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 65090 00:09:12.792 /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh: line 91: kill: (65090) - No such process 00:09:12.792 11:21:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:09:12.792 11:21:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:09:12.792 11:21:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:09:12.792 11:21:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:09:12.792 11:21:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:09:12.792 11:21:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:09:12.792 11:21:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:12.792 11:21:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:12.792 { 00:09:12.792 "params": { 00:09:12.792 "name": "Nvme$subsystem", 00:09:12.792 "trtype": "$TEST_TRANSPORT", 00:09:12.792 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:12.792 "adrfam": "ipv4", 00:09:12.792 "trsvcid": "$NVMF_PORT", 00:09:12.792 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:12.792 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:12.792 "hdgst": ${hdgst:-false}, 00:09:12.792 "ddgst": ${ddgst:-false} 00:09:12.792 }, 00:09:12.792 "method": "bdev_nvme_attach_controller" 00:09:12.792 } 00:09:12.792 EOF 00:09:12.792 )") 00:09:12.792 11:21:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:09:12.792 11:21:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:09:12.792 11:21:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:09:12.792 11:21:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:12.792 "params": { 00:09:12.792 "name": "Nvme0", 00:09:12.792 "trtype": "tcp", 00:09:12.792 "traddr": "10.0.0.3", 00:09:12.792 "adrfam": "ipv4", 00:09:12.792 "trsvcid": "4420", 00:09:12.792 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:12.792 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:09:12.792 "hdgst": false, 00:09:12.792 "ddgst": false 00:09:12.792 }, 00:09:12.792 "method": "bdev_nvme_attach_controller" 00:09:12.792 }' 00:09:12.792 [2024-11-20 11:21:58.039464] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 00:09:12.793 [2024-11-20 11:21:58.039551] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65131 ] 00:09:12.793 [2024-11-20 11:21:58.187926] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:12.793 [2024-11-20 11:21:58.231699] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:13.051 Running I/O for 1 seconds... 
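The heredoc above feeds a single bdev_nvme_attach_controller entry to bdevperf through /dev/fd/62. A hedged, standalone equivalent that writes the same attach parameters to a file first; the outer subsystems/bdev wrapper is assumed (only the inner entry is visible in the trace), and /tmp/bdevperf_nvme0.json is an illustrative path:

cat > /tmp/bdevperf_nvme0.json <<'JSON'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.3",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
JSON
# Same workload knobs as the traced run: QD 64, 64 KiB IOs, verify, 1 second.
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
  --json /tmp/bdevperf_nvme0.json -q 64 -o 65536 -w verify -t 1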
00:09:13.984 1408.00 IOPS, 88.00 MiB/s 00:09:13.984 Latency(us) 00:09:13.984 [2024-11-20T11:21:59.574Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:13.985 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:09:13.985 Verification LBA range: start 0x0 length 0x400 00:09:13.985 Nvme0n1 : 1.01 1460.45 91.28 0.00 0.00 42830.58 8043.05 42657.98 00:09:13.985 [2024-11-20T11:21:59.574Z] =================================================================================================================== 00:09:13.985 [2024-11-20T11:21:59.574Z] Total : 1460.45 91.28 0.00 0.00 42830.58 8043.05 42657.98 00:09:13.985 11:21:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:09:13.985 11:21:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:09:13.985 11:21:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevperf.conf 00:09:13.985 11:21:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:09:13.985 11:21:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:09:13.985 11:21:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:13.985 11:21:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:09:14.243 11:21:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:14.243 11:21:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:09:14.243 11:21:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:14.243 11:21:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:14.243 rmmod nvme_tcp 00:09:14.243 rmmod nvme_fabrics 00:09:14.243 rmmod nvme_keyring 00:09:14.243 11:21:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:14.243 11:21:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:09:14.243 11:21:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:09:14.243 11:21:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 65026 ']' 00:09:14.243 11:21:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 65026 00:09:14.243 11:21:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 65026 ']' 00:09:14.243 11:21:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 65026 00:09:14.243 11:21:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:09:14.243 11:21:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:14.243 11:21:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65026 00:09:14.243 11:21:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:09:14.243 11:21:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' 
reactor_1 = sudo ']' 00:09:14.243 killing process with pid 65026 00:09:14.243 11:21:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65026' 00:09:14.243 11:21:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 65026 00:09:14.243 11:21:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 65026 00:09:14.243 [2024-11-20 11:21:59.811383] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:09:14.501 11:21:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:14.501 11:21:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:14.502 11:21:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:14.502 11:21:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:09:14.502 11:21:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:09:14.502 11:21:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:14.502 11:21:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:09:14.502 11:21:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:14.502 11:21:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:09:14.502 11:21:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:09:14.502 11:21:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:09:14.502 11:21:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:09:14.502 11:21:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:09:14.502 11:21:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:09:14.502 11:21:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:09:14.502 11:21:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:09:14.502 11:21:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:09:14.502 11:21:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:09:14.502 11:21:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:09:14.502 11:22:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:09:14.502 11:22:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:14.502 11:22:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:14.502 11:22:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@246 -- # remove_spdk_ns 00:09:14.502 11:22:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:09:14.502 11:22:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:14.502 11:22:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:14.760 11:22:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@300 -- # return 0 00:09:14.760 11:22:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:09:14.760 00:09:14.760 real 0m5.174s 00:09:14.760 user 0m18.328s 00:09:14.760 sys 0m1.260s 00:09:14.760 11:22:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:14.760 11:22:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:14.760 ************************************ 00:09:14.760 END TEST nvmf_host_management 00:09:14.760 ************************************ 00:09:14.760 11:22:00 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:09:14.760 11:22:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:14.760 11:22:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:14.760 11:22:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:14.760 ************************************ 00:09:14.761 START TEST nvmf_lvol 00:09:14.761 ************************************ 00:09:14.761 11:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:09:14.761 * Looking for test storage... 
00:09:14.761 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:14.761 11:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:14.761 11:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # lcov --version 00:09:14.761 11:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:15.021 11:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:15.021 11:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:15.021 11:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:15.021 11:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:15.021 11:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:09:15.021 11:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:09:15.021 11:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:09:15.021 11:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:09:15.021 11:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:09:15.021 11:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:09:15.021 11:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:09:15.021 11:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:15.021 11:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:09:15.021 11:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:09:15.021 11:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:15.021 11:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:15.021 11:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:09:15.021 11:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:09:15.021 11:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:15.021 11:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:09:15.021 11:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:09:15.021 11:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:09:15.021 11:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:09:15.021 11:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:15.021 11:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:09:15.021 11:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:09:15.021 11:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:15.021 11:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:15.021 11:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:09:15.021 11:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:15.021 11:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:15.021 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:15.021 --rc genhtml_branch_coverage=1 00:09:15.021 --rc genhtml_function_coverage=1 00:09:15.021 --rc genhtml_legend=1 00:09:15.021 --rc geninfo_all_blocks=1 00:09:15.021 --rc geninfo_unexecuted_blocks=1 00:09:15.021 00:09:15.021 ' 00:09:15.021 11:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:15.021 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:15.021 --rc genhtml_branch_coverage=1 00:09:15.021 --rc genhtml_function_coverage=1 00:09:15.021 --rc genhtml_legend=1 00:09:15.021 --rc geninfo_all_blocks=1 00:09:15.021 --rc geninfo_unexecuted_blocks=1 00:09:15.021 00:09:15.021 ' 00:09:15.021 11:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:15.021 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:15.021 --rc genhtml_branch_coverage=1 00:09:15.021 --rc genhtml_function_coverage=1 00:09:15.021 --rc genhtml_legend=1 00:09:15.021 --rc geninfo_all_blocks=1 00:09:15.021 --rc geninfo_unexecuted_blocks=1 00:09:15.021 00:09:15.021 ' 00:09:15.021 11:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:15.021 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:15.021 --rc genhtml_branch_coverage=1 00:09:15.021 --rc genhtml_function_coverage=1 00:09:15.021 --rc genhtml_legend=1 00:09:15.021 --rc geninfo_all_blocks=1 00:09:15.021 --rc geninfo_unexecuted_blocks=1 00:09:15.021 00:09:15.021 ' 00:09:15.021 11:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:15.021 11:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:09:15.021 11:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:15.021 11:22:00 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:15.021 11:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:15.021 11:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:15.021 11:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:15.021 11:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:15.021 11:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:15.021 11:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:15.021 11:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:15.021 11:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:15.021 11:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a 00:09:15.021 11:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=806d442c-08ba-4719-b76d-33169283459a 00:09:15.021 11:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:15.021 11:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:15.021 11:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:15.021 11:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:15.021 11:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:15.021 11:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:09:15.021 11:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:15.021 11:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:15.021 11:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:15.021 11:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:15.021 11:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:15.021 11:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:15.021 11:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:09:15.021 11:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:15.021 11:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:09:15.021 11:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:15.021 11:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:15.021 11:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:15.021 11:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:15.021 11:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:15.021 11:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:15.021 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:15.021 11:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:15.022 11:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:15.022 11:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:15.022 11:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:15.022 11:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:15.022 11:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:09:15.022 
11:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:09:15.022 11:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:15.022 11:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:09:15.022 11:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:15.022 11:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:15.022 11:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:15.022 11:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:15.022 11:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:15.022 11:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:15.022 11:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:15.022 11:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:15.022 11:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:09:15.022 11:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:09:15.022 11:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:09:15.022 11:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:09:15.022 11:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:09:15.022 11:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@460 -- # nvmf_veth_init 00:09:15.022 11:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:15.022 11:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:09:15.022 11:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:09:15.022 11:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:09:15.022 11:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:15.022 11:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:09:15.022 11:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:15.022 11:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:09:15.022 11:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:15.022 11:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:09:15.022 11:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:15.022 11:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:15.022 11:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:15.022 11:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 
00:09:15.022 11:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:15.022 11:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:15.022 11:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:09:15.022 Cannot find device "nvmf_init_br" 00:09:15.022 11:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@162 -- # true 00:09:15.022 11:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:09:15.022 Cannot find device "nvmf_init_br2" 00:09:15.022 11:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@163 -- # true 00:09:15.022 11:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:09:15.022 Cannot find device "nvmf_tgt_br" 00:09:15.022 11:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@164 -- # true 00:09:15.022 11:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:09:15.022 Cannot find device "nvmf_tgt_br2" 00:09:15.022 11:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@165 -- # true 00:09:15.022 11:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:09:15.022 Cannot find device "nvmf_init_br" 00:09:15.022 11:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@166 -- # true 00:09:15.022 11:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:09:15.022 Cannot find device "nvmf_init_br2" 00:09:15.022 11:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@167 -- # true 00:09:15.022 11:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:09:15.022 Cannot find device "nvmf_tgt_br" 00:09:15.022 11:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@168 -- # true 00:09:15.022 11:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:09:15.022 Cannot find device "nvmf_tgt_br2" 00:09:15.022 11:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@169 -- # true 00:09:15.022 11:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:09:15.022 Cannot find device "nvmf_br" 00:09:15.022 11:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@170 -- # true 00:09:15.022 11:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:09:15.022 Cannot find device "nvmf_init_if" 00:09:15.022 11:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@171 -- # true 00:09:15.022 11:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:09:15.022 Cannot find device "nvmf_init_if2" 00:09:15.022 11:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@172 -- # true 00:09:15.022 11:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:15.022 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:15.022 11:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@173 -- # true 00:09:15.022 11:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:15.022 Cannot open network namespace "nvmf_tgt_ns_spdk": No 
such file or directory 00:09:15.022 11:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@174 -- # true 00:09:15.022 11:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:09:15.022 11:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:15.022 11:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:09:15.022 11:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:15.022 11:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:15.022 11:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:15.282 11:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:15.282 11:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:15.282 11:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:09:15.282 11:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:09:15.282 11:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:09:15.282 11:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:09:15.282 11:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:09:15.282 11:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:09:15.282 11:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:09:15.282 11:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:09:15.282 11:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:09:15.282 11:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:15.282 11:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:15.282 11:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:15.282 11:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:09:15.282 11:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:09:15.282 11:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:09:15.282 11:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:09:15.282 11:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:15.282 11:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:15.282 11:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@217 -- # ipts -I INPUT 
1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:15.282 11:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:09:15.282 11:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:09:15.282 11:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:09:15.282 11:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:15.282 11:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:09:15.282 11:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:09:15.282 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:15.282 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.115 ms 00:09:15.282 00:09:15.282 --- 10.0.0.3 ping statistics --- 00:09:15.282 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:15.282 rtt min/avg/max/mdev = 0.115/0.115/0.115/0.000 ms 00:09:15.282 11:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:09:15.282 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:09:15.282 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.095 ms 00:09:15.282 00:09:15.282 --- 10.0.0.4 ping statistics --- 00:09:15.282 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:15.282 rtt min/avg/max/mdev = 0.095/0.095/0.095/0.000 ms 00:09:15.282 11:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:15.282 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:15.282 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.049 ms 00:09:15.282 00:09:15.282 --- 10.0.0.1 ping statistics --- 00:09:15.282 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:15.282 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:09:15.282 11:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:09:15.282 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:15.282 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.078 ms 00:09:15.282 00:09:15.282 --- 10.0.0.2 ping statistics --- 00:09:15.282 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:15.282 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:09:15.282 11:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:15.282 11:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@461 -- # return 0 00:09:15.282 11:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:15.283 11:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:15.283 11:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:15.283 11:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:15.283 11:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:15.283 11:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:15.283 11:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:15.283 11:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:09:15.283 11:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:15.283 11:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:15.283 11:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:09:15.283 11:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=65398 00:09:15.283 11:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 65398 00:09:15.283 11:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 65398 ']' 00:09:15.283 11:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:15.283 11:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:09:15.283 11:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:15.283 11:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:15.283 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:15.283 11:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:15.283 11:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:09:15.542 [2024-11-20 11:22:00.903846] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 
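Before any of the lvol RPCs can run, nvmfappstart launches the target inside the nvmf_tgt_ns_spdk namespace exactly as traced above (nvmf_tgt -i 0 -e 0xFFFF -m 0x7) and waits on /var/tmp/spdk.sock. A minimal sketch of that launch; the spdk_get_version polling loop stands in for the harness's waitforlisten helper and is an assumption, not the traced code:

NS=nvmf_tgt_ns_spdk
SPDK=/home/vagrant/spdk_repo/spdk
ip netns exec "$NS" "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x7 &
nvmfpid=$!
# Poll the RPC socket until the target is ready to accept configuration RPCs.
until "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1; do
  sleep 0.1
done
echo "nvmf_tgt listening on /var/tmp/spdk.sock, pid $nvmfpid"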
00:09:15.542 [2024-11-20 11:22:00.903976] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:15.542 [2024-11-20 11:22:01.057528] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:15.542 [2024-11-20 11:22:01.097205] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:15.542 [2024-11-20 11:22:01.097276] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:15.542 [2024-11-20 11:22:01.097291] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:15.542 [2024-11-20 11:22:01.097301] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:15.542 [2024-11-20 11:22:01.097310] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:15.542 [2024-11-20 11:22:01.098439] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:15.542 [2024-11-20 11:22:01.098563] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:15.542 [2024-11-20 11:22:01.098579] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:15.819 11:22:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:15.819 11:22:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:09:15.819 11:22:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:15.819 11:22:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:15.819 11:22:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:09:15.819 11:22:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:15.819 11:22:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:16.083 [2024-11-20 11:22:01.548414] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:16.083 11:22:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:16.651 11:22:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:09:16.651 11:22:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:16.910 11:22:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:09:16.910 11:22:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:09:17.169 11:22:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:09:17.736 11:22:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=2edc2358-079a-482f-90b2-8673104a95b5 00:09:17.736 11:22:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 
2edc2358-079a-482f-90b2-8673104a95b5 lvol 20 00:09:17.994 11:22:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=8a420c45-f950-4b09-a475-9181fada7746 00:09:17.994 11:22:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:18.252 11:22:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 8a420c45-f950-4b09-a475-9181fada7746 00:09:18.511 11:22:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:09:18.770 [2024-11-20 11:22:04.303691] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:09:18.770 11:22:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:09:19.337 11:22:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=65537 00:09:19.337 11:22:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:09:19.337 11:22:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:09:20.269 11:22:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_snapshot 8a420c45-f950-4b09-a475-9181fada7746 MY_SNAPSHOT 00:09:20.526 11:22:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=dcafb414-e457-4661-a855-53bb69509c89 00:09:20.526 11:22:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_resize 8a420c45-f950-4b09-a475-9181fada7746 30 00:09:21.092 11:22:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_clone dcafb414-e457-4661-a855-53bb69509c89 MY_CLONE 00:09:21.351 11:22:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=7f18d097-65ab-4d88-808e-47c9c02704f4 00:09:21.351 11:22:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_inflate 7f18d097-65ab-4d88-808e-47c9c02704f4 00:09:22.284 11:22:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 65537 00:09:30.400 Initializing NVMe Controllers 00:09:30.400 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode0 00:09:30.400 Controller IO queue size 128, less than required. 00:09:30.400 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:09:30.400 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:09:30.400 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:09:30.400 Initialization complete. Launching workers. 
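The nvmf_lvol run traced above builds its whole stack through rpc.py: a TCP transport, two 64 MiB malloc bdevs striped into raid0, an lvstore on the raid, a 20 MiB lvol exported via nqn.2016-06.io.spdk:cnode0 on 10.0.0.3:4420, and then, while spdk_nvme_perf drives random writes, a snapshot/resize/clone/inflate cycle against the live volume (the perf results follow below). Condensed into one script, as a sketch of the traced sequence rather than the test itself; UUIDs are captured from the create calls instead of being hard-coded:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512          # -> Malloc0
$rpc bdev_malloc_create 64 512          # -> Malloc1
$rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'

lvs=$($rpc bdev_lvol_create_lvstore raid0 lvs)      # lvstore UUID
lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 20)     # 20 MiB volume

$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420

# While I/O runs, exercise the lvol features the test is really after:
snap=$($rpc bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)
$rpc bdev_lvol_resize "$lvol" 30                    # grow the live lvol to 30 MiB
clone=$($rpc bdev_lvol_clone "$snap" MY_CLONE)
$rpc bdev_lvol_inflate "$clone"                     # decouple the clone from its snapshot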
00:09:30.400 ======================================================== 00:09:30.400 Latency(us) 00:09:30.400 Device Information : IOPS MiB/s Average min max 00:09:30.400 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10091.30 39.42 12687.94 2178.83 62738.14 00:09:30.400 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10222.70 39.93 12521.48 2732.02 56565.05 00:09:30.400 ======================================================== 00:09:30.400 Total : 20314.00 79.35 12604.17 2178.83 62738.14 00:09:30.400 00:09:30.400 11:22:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:30.400 11:22:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 8a420c45-f950-4b09-a475-9181fada7746 00:09:30.400 11:22:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 2edc2358-079a-482f-90b2-8673104a95b5 00:09:30.400 11:22:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:09:30.400 11:22:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:09:30.400 11:22:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:09:30.400 11:22:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:30.400 11:22:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:09:30.400 11:22:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:30.400 11:22:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:09:30.400 11:22:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:30.400 11:22:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:30.400 rmmod nvme_tcp 00:09:30.400 rmmod nvme_fabrics 00:09:30.400 rmmod nvme_keyring 00:09:30.400 11:22:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:30.400 11:22:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:09:30.400 11:22:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:09:30.400 11:22:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 65398 ']' 00:09:30.400 11:22:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 65398 00:09:30.400 11:22:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 65398 ']' 00:09:30.400 11:22:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 65398 00:09:30.660 11:22:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:09:30.660 11:22:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:30.660 11:22:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65398 00:09:30.660 killing process with pid 65398 00:09:30.660 11:22:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:30.660 11:22:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:30.660 11:22:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 65398' 00:09:30.660 11:22:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 65398 00:09:30.660 11:22:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 65398 00:09:30.660 11:22:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:30.660 11:22:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:30.660 11:22:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:30.660 11:22:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:09:30.660 11:22:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:30.660 11:22:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:09:30.660 11:22:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:09:30.660 11:22:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:30.660 11:22:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:09:30.660 11:22:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:09:30.660 11:22:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:09:30.660 11:22:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:09:30.660 11:22:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:09:30.919 11:22:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:09:30.919 11:22:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:09:30.919 11:22:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:09:30.919 11:22:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:09:30.919 11:22:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:09:30.919 11:22:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:09:30.919 11:22:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:09:30.919 11:22:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:30.919 11:22:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:30.919 11:22:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@246 -- # remove_spdk_ns 00:09:30.919 11:22:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:30.919 11:22:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:30.919 11:22:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:30.919 11:22:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@300 -- # return 0 00:09:30.919 00:09:30.919 real 0m16.269s 00:09:30.919 user 1m7.439s 00:09:30.919 sys 0m3.862s 00:09:30.919 11:22:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:09:30.919 11:22:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:09:30.919 ************************************ 00:09:30.919 END TEST nvmf_lvol 00:09:30.919 ************************************ 00:09:30.919 11:22:16 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:09:30.919 11:22:16 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:30.919 11:22:16 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:30.919 11:22:16 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:30.919 ************************************ 00:09:30.919 START TEST nvmf_lvs_grow 00:09:30.919 ************************************ 00:09:30.919 11:22:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:09:31.179 * Looking for test storage... 00:09:31.179 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:31.179 11:22:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:31.179 11:22:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lcov --version 00:09:31.179 11:22:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:31.179 11:22:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:31.179 11:22:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:31.179 11:22:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:31.179 11:22:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:31.179 11:22:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:09:31.179 11:22:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:09:31.179 11:22:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:09:31.179 11:22:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:09:31.179 11:22:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:09:31.179 11:22:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:09:31.179 11:22:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:09:31.179 11:22:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:31.179 11:22:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:09:31.179 11:22:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:09:31.179 11:22:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:31.179 11:22:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:31.179 11:22:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:09:31.179 11:22:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:09:31.179 11:22:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:31.179 11:22:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:09:31.179 11:22:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:09:31.179 11:22:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:09:31.179 11:22:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:09:31.179 11:22:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:31.179 11:22:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:09:31.179 11:22:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:09:31.179 11:22:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:31.179 11:22:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:31.179 11:22:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:09:31.179 11:22:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:31.179 11:22:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:31.179 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:31.179 --rc genhtml_branch_coverage=1 00:09:31.179 --rc genhtml_function_coverage=1 00:09:31.179 --rc genhtml_legend=1 00:09:31.179 --rc geninfo_all_blocks=1 00:09:31.179 --rc geninfo_unexecuted_blocks=1 00:09:31.179 00:09:31.179 ' 00:09:31.179 11:22:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:31.179 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:31.179 --rc genhtml_branch_coverage=1 00:09:31.179 --rc genhtml_function_coverage=1 00:09:31.179 --rc genhtml_legend=1 00:09:31.179 --rc geninfo_all_blocks=1 00:09:31.179 --rc geninfo_unexecuted_blocks=1 00:09:31.179 00:09:31.179 ' 00:09:31.179 11:22:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:31.179 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:31.179 --rc genhtml_branch_coverage=1 00:09:31.179 --rc genhtml_function_coverage=1 00:09:31.179 --rc genhtml_legend=1 00:09:31.179 --rc geninfo_all_blocks=1 00:09:31.179 --rc geninfo_unexecuted_blocks=1 00:09:31.179 00:09:31.179 ' 00:09:31.179 11:22:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:31.179 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:31.179 --rc genhtml_branch_coverage=1 00:09:31.179 --rc genhtml_function_coverage=1 00:09:31.179 --rc genhtml_legend=1 00:09:31.179 --rc geninfo_all_blocks=1 00:09:31.179 --rc geninfo_unexecuted_blocks=1 00:09:31.179 00:09:31.179 ' 00:09:31.179 11:22:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:31.179 11:22:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:09:31.179 11:22:16 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:31.179 11:22:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:31.179 11:22:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:31.179 11:22:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:31.179 11:22:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:31.179 11:22:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:31.179 11:22:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:31.179 11:22:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:31.179 11:22:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:31.179 11:22:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:31.179 11:22:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a 00:09:31.179 11:22:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=806d442c-08ba-4719-b76d-33169283459a 00:09:31.179 11:22:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:31.179 11:22:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:31.179 11:22:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:31.179 11:22:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:31.179 11:22:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:31.179 11:22:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:09:31.179 11:22:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:31.179 11:22:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:31.179 11:22:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:31.179 11:22:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:31.179 11:22:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:31.180 11:22:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:31.180 11:22:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:09:31.180 11:22:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:31.180 11:22:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:09:31.180 11:22:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:31.180 11:22:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:31.180 11:22:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:31.180 11:22:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:31.180 11:22:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:31.180 11:22:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:31.180 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:31.180 11:22:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:31.180 11:22:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:31.180 11:22:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:31.180 11:22:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:31.180 11:22:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 
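Before the lvs_grow test proper, the harness gates lcov-specific coverage flags on the installed lcov version; the lt/cmp_versions helpers traced above split each version string on ".", "-" and ":" and compare the components numerically. A compact, simplified rendition of that comparison (numeric components only; the real helper in scripts/common.sh also handles ">" and equality):

# Sketch of the dotted-version comparison used for the lcov gate above.
# Succeeds (returns 0) when $1 < $2, comparing components left to right.
version_lt() {
    local IFS=.-:
    local -a a=($1) b=($2)
    local i
    for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
        local x=${a[i]:-0} y=${b[i]:-0}
        ((x < y)) && return 0
        ((x > y)) && return 1
    done
    return 1    # equal versions are not "less than"
}

version_lt 1.15 2 && echo "old lcov: enable branch/function coverage options"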
00:09:31.180 11:22:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:09:31.180 11:22:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:31.180 11:22:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:31.180 11:22:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:31.180 11:22:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:31.180 11:22:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:31.180 11:22:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:31.180 11:22:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:31.180 11:22:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:31.180 11:22:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:09:31.180 11:22:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:09:31.180 11:22:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:09:31.180 11:22:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:09:31.180 11:22:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:09:31.180 11:22:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@460 -- # nvmf_veth_init 00:09:31.180 11:22:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:31.180 11:22:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:09:31.180 11:22:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:09:31.180 11:22:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:09:31.180 11:22:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:31.180 11:22:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:09:31.180 11:22:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:31.180 11:22:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:09:31.180 11:22:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:31.180 11:22:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:09:31.180 11:22:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:31.180 11:22:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:31.180 11:22:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:31.180 11:22:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:31.180 11:22:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 
00:09:31.180 11:22:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:31.180 11:22:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:09:31.180 Cannot find device "nvmf_init_br" 00:09:31.180 11:22:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@162 -- # true 00:09:31.180 11:22:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:09:31.180 Cannot find device "nvmf_init_br2" 00:09:31.180 11:22:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@163 -- # true 00:09:31.180 11:22:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:09:31.180 Cannot find device "nvmf_tgt_br" 00:09:31.180 11:22:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@164 -- # true 00:09:31.180 11:22:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:09:31.180 Cannot find device "nvmf_tgt_br2" 00:09:31.180 11:22:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@165 -- # true 00:09:31.180 11:22:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:09:31.180 Cannot find device "nvmf_init_br" 00:09:31.180 11:22:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@166 -- # true 00:09:31.180 11:22:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:09:31.439 Cannot find device "nvmf_init_br2" 00:09:31.439 11:22:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@167 -- # true 00:09:31.439 11:22:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:09:31.439 Cannot find device "nvmf_tgt_br" 00:09:31.439 11:22:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@168 -- # true 00:09:31.439 11:22:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:09:31.439 Cannot find device "nvmf_tgt_br2" 00:09:31.439 11:22:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@169 -- # true 00:09:31.439 11:22:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:09:31.439 Cannot find device "nvmf_br" 00:09:31.439 11:22:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@170 -- # true 00:09:31.439 11:22:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:09:31.439 Cannot find device "nvmf_init_if" 00:09:31.439 11:22:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@171 -- # true 00:09:31.439 11:22:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:09:31.439 Cannot find device "nvmf_init_if2" 00:09:31.439 11:22:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@172 -- # true 00:09:31.439 11:22:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:31.439 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:31.439 11:22:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@173 -- # true 00:09:31.439 11:22:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:31.439 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or 
directory 00:09:31.439 11:22:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@174 -- # true 00:09:31.439 11:22:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:09:31.439 11:22:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:31.439 11:22:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:09:31.439 11:22:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:31.439 11:22:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:31.439 11:22:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:31.439 11:22:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:31.439 11:22:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:31.439 11:22:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:09:31.439 11:22:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:09:31.439 11:22:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:09:31.439 11:22:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:09:31.439 11:22:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:09:31.439 11:22:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:09:31.439 11:22:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:09:31.439 11:22:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:09:31.439 11:22:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:09:31.439 11:22:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:31.439 11:22:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:31.439 11:22:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:31.439 11:22:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:09:31.439 11:22:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:09:31.439 11:22:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:09:31.439 11:22:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:09:31.698 11:22:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:31.698 11:22:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 
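The nvmf_veth_init phase above rebuilds the virtual test network from scratch: two initiator veth pairs stay in the root namespace, two target pairs are moved into nvmf_tgt_ns_spdk, addresses 10.0.0.1 through 10.0.0.4 are assigned, and the *_br ends are tied together with the nvmf_br bridge. The same topology written out as a plain script, with the interface names and addresses exactly as they appear in the trace:

# Namespace for the SPDK target side.
ip netns add nvmf_tgt_ns_spdk

# veth pairs: *_if ends carry addresses, *_br ends get enslaved to the bridge.
ip link add nvmf_init_if  type veth peer name nvmf_init_br
ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2

# Target-side endpoints live inside the namespace.
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

# Initiator addresses in the root namespace, target addresses in the netns.
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip addr add 10.0.0.2/24 dev nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2

# Bring everything up and bridge the *_br ends so both sides can talk.
ip link set nvmf_init_if up;  ip link set nvmf_init_if2 up
ip link set nvmf_init_br up;  ip link set nvmf_init_br2 up
ip link set nvmf_tgt_br up;   ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br  master nvmf_br
ip link set nvmf_init_br2 master nvmf_br
ip link set nvmf_tgt_br   master nvmf_br
ip link set nvmf_tgt_br2  master nvmf_br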
00:09:31.698 11:22:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:31.698 11:22:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:09:31.698 11:22:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:09:31.698 11:22:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:09:31.698 11:22:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:31.698 11:22:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:09:31.698 11:22:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:09:31.698 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:31.698 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.071 ms 00:09:31.698 00:09:31.698 --- 10.0.0.3 ping statistics --- 00:09:31.698 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:31.698 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:09:31.698 11:22:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:09:31.698 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:09:31.698 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.071 ms 00:09:31.698 00:09:31.698 --- 10.0.0.4 ping statistics --- 00:09:31.698 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:31.698 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:09:31.698 11:22:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:31.698 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:31.698 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.035 ms 00:09:31.698 00:09:31.698 --- 10.0.0.1 ping statistics --- 00:09:31.698 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:31.698 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:09:31.698 11:22:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:09:31.698 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:31.698 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.067 ms 00:09:31.698 00:09:31.698 --- 10.0.0.2 ping statistics --- 00:09:31.698 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:31.698 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:09:31.698 11:22:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:31.698 11:22:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@461 -- # return 0 00:09:31.698 11:22:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:31.698 11:22:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:31.698 11:22:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:31.698 11:22:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:31.698 11:22:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:31.698 11:22:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:31.698 11:22:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:31.698 11:22:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:09:31.698 11:22:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:31.698 11:22:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:31.698 11:22:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:31.698 11:22:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=65963 00:09:31.698 11:22:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:09:31.698 11:22:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 65963 00:09:31.698 11:22:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 65963 ']' 00:09:31.698 11:22:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:31.698 11:22:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:31.698 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:31.698 11:22:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:31.698 11:22:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:31.698 11:22:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:31.698 [2024-11-20 11:22:17.188778] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 
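Before this second nvmf_tgt comes up, the harness repeats the firewall and reachability checks traced just above: port 4420 is opened for both initiator interfaces with tagged iptables rules (so the SPDK_NVMF-commented rules can be stripped again at teardown, as the iptr/iptables-restore step did earlier), and all four addresses are pinged from both sides. Collected in one place:

# Allow NVMe/TCP (port 4420) in from the initiator interfaces; the comments
# let teardown filter these rules back out with iptables-save | grep -v SPDK_NVMF.
iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT \
         -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT \
         -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT'
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT \
         -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT'

# Sanity-check the topology from both sides before starting the target.
ping -c 1 10.0.0.3                                   # root ns -> target if
ping -c 1 10.0.0.4                                   # root ns -> target if2
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1    # netns -> initiator if
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2    # netns -> initiator if2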
00:09:31.698 [2024-11-20 11:22:17.188878] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:31.957 [2024-11-20 11:22:17.345056] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:31.957 [2024-11-20 11:22:17.382389] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:31.957 [2024-11-20 11:22:17.382446] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:31.957 [2024-11-20 11:22:17.382460] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:31.957 [2024-11-20 11:22:17.382471] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:31.957 [2024-11-20 11:22:17.382480] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:31.957 [2024-11-20 11:22:17.382839] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:32.893 11:22:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:32.893 11:22:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:09:32.893 11:22:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:32.893 11:22:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:32.893 11:22:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:32.893 11:22:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:32.893 11:22:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:33.152 [2024-11-20 11:22:18.549039] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:33.152 11:22:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:09:33.152 11:22:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:33.152 11:22:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:33.152 11:22:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:33.152 ************************************ 00:09:33.152 START TEST lvs_grow_clean 00:09:33.152 ************************************ 00:09:33.152 11:22:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:09:33.152 11:22:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:09:33.152 11:22:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:09:33.152 11:22:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:09:33.152 11:22:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:09:33.152 11:22:18 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:09:33.152 11:22:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:09:33.152 11:22:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:33.152 11:22:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:33.152 11:22:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:33.411 11:22:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:09:33.411 11:22:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:09:33.978 11:22:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=1acb1262-c4ed-4872-829a-b1f0b7b16847 00:09:33.978 11:22:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:09:33.979 11:22:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1acb1262-c4ed-4872-829a-b1f0b7b16847 00:09:34.237 11:22:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:09:34.237 11:22:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:09:34.237 11:22:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 1acb1262-c4ed-4872-829a-b1f0b7b16847 lvol 150 00:09:34.495 11:22:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=6d5a6074-feae-428a-aac2-7bc62b385def 00:09:34.495 11:22:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:34.495 11:22:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:09:34.753 [2024-11-20 11:22:20.144575] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:09:34.753 [2024-11-20 11:22:20.144660] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:09:34.753 true 00:09:34.753 11:22:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1acb1262-c4ed-4872-829a-b1f0b7b16847 00:09:34.753 11:22:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:09:35.012 11:22:20 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:09:35.012 11:22:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:35.272 11:22:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 6d5a6074-feae-428a-aac2-7bc62b385def 00:09:35.538 11:22:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:09:35.796 [2024-11-20 11:22:21.369255] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:09:36.055 11:22:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:09:36.314 11:22:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:09:36.314 11:22:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=66131 00:09:36.314 11:22:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:36.314 11:22:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 66131 /var/tmp/bdevperf.sock 00:09:36.314 11:22:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 66131 ']' 00:09:36.314 11:22:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:36.314 11:22:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:36.314 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:36.314 11:22:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:36.314 11:22:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:36.314 11:22:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:09:36.314 [2024-11-20 11:22:21.726548] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 
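The lvs_grow_clean case above is the core of this test: an lvstore is created on a 200 MiB file-backed AIO bdev with 4 MiB clusters (49 usable data clusters), a 150 MiB lvol is carved out of it, the backing file is then doubled to 400 MiB and rescanned, and the point being verified is that the cluster count stays at 49 until bdev_lvol_grow_lvstore is called, after which it roughly doubles to 99 (as seen further below). A sketch of that sequence, with the paths from the trace; the cluster counts are what this run reports, not guaranteed constants:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
aio_file=/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev

truncate -s 200M "$aio_file"
$rpc bdev_aio_create "$aio_file" aio_bdev 4096        # 4 KiB block size
lvs=$($rpc bdev_lvol_create_lvstore --cluster-sz 4194304 \
        --md-pages-per-cluster-ratio 300 aio_bdev lvs)
$rpc bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # 49

$rpc bdev_lvol_create -u "$lvs" lvol 150              # 150 MiB volume

# Grow the backing file and tell the AIO bdev to re-read its size...
truncate -s 400M "$aio_file"
$rpc bdev_aio_rescan aio_bdev
$rpc bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # still 49

# ...only bdev_lvol_grow_lvstore actually claims the new space for the lvstore.
$rpc bdev_lvol_grow_lvstore -u "$lvs"
$rpc bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # 99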
00:09:36.314 [2024-11-20 11:22:21.726647] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66131 ] 00:09:36.314 [2024-11-20 11:22:21.874791] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:36.573 [2024-11-20 11:22:21.933866] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:36.573 11:22:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:36.573 11:22:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:09:36.573 11:22:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:09:36.831 Nvme0n1 00:09:36.831 11:22:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:09:37.090 [ 00:09:37.090 { 00:09:37.090 "aliases": [ 00:09:37.090 "6d5a6074-feae-428a-aac2-7bc62b385def" 00:09:37.090 ], 00:09:37.090 "assigned_rate_limits": { 00:09:37.090 "r_mbytes_per_sec": 0, 00:09:37.090 "rw_ios_per_sec": 0, 00:09:37.090 "rw_mbytes_per_sec": 0, 00:09:37.090 "w_mbytes_per_sec": 0 00:09:37.090 }, 00:09:37.090 "block_size": 4096, 00:09:37.090 "claimed": false, 00:09:37.090 "driver_specific": { 00:09:37.090 "mp_policy": "active_passive", 00:09:37.090 "nvme": [ 00:09:37.090 { 00:09:37.090 "ctrlr_data": { 00:09:37.090 "ana_reporting": false, 00:09:37.090 "cntlid": 1, 00:09:37.090 "firmware_revision": "25.01", 00:09:37.090 "model_number": "SPDK bdev Controller", 00:09:37.090 "multi_ctrlr": true, 00:09:37.090 "oacs": { 00:09:37.090 "firmware": 0, 00:09:37.090 "format": 0, 00:09:37.090 "ns_manage": 0, 00:09:37.090 "security": 0 00:09:37.090 }, 00:09:37.090 "serial_number": "SPDK0", 00:09:37.090 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:37.090 "vendor_id": "0x8086" 00:09:37.090 }, 00:09:37.090 "ns_data": { 00:09:37.090 "can_share": true, 00:09:37.090 "id": 1 00:09:37.090 }, 00:09:37.090 "trid": { 00:09:37.090 "adrfam": "IPv4", 00:09:37.090 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:37.090 "traddr": "10.0.0.3", 00:09:37.090 "trsvcid": "4420", 00:09:37.090 "trtype": "TCP" 00:09:37.090 }, 00:09:37.090 "vs": { 00:09:37.090 "nvme_version": "1.3" 00:09:37.090 } 00:09:37.090 } 00:09:37.090 ] 00:09:37.090 }, 00:09:37.090 "memory_domains": [ 00:09:37.090 { 00:09:37.090 "dma_device_id": "system", 00:09:37.090 "dma_device_type": 1 00:09:37.090 } 00:09:37.090 ], 00:09:37.090 "name": "Nvme0n1", 00:09:37.090 "num_blocks": 38912, 00:09:37.090 "numa_id": -1, 00:09:37.090 "product_name": "NVMe disk", 00:09:37.090 "supported_io_types": { 00:09:37.090 "abort": true, 00:09:37.090 "compare": true, 00:09:37.090 "compare_and_write": true, 00:09:37.090 "copy": true, 00:09:37.090 "flush": true, 00:09:37.090 "get_zone_info": false, 00:09:37.090 "nvme_admin": true, 00:09:37.090 "nvme_io": true, 00:09:37.090 "nvme_io_md": false, 00:09:37.090 "nvme_iov_md": false, 00:09:37.090 "read": true, 00:09:37.090 "reset": true, 00:09:37.090 "seek_data": false, 00:09:37.091 "seek_hole": false, 00:09:37.091 "unmap": true, 00:09:37.091 
"write": true, 00:09:37.091 "write_zeroes": true, 00:09:37.091 "zcopy": false, 00:09:37.091 "zone_append": false, 00:09:37.091 "zone_management": false 00:09:37.091 }, 00:09:37.091 "uuid": "6d5a6074-feae-428a-aac2-7bc62b385def", 00:09:37.091 "zoned": false 00:09:37.091 } 00:09:37.091 ] 00:09:37.091 11:22:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=66166 00:09:37.091 11:22:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:37.091 11:22:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:09:37.348 Running I/O for 10 seconds... 00:09:38.283 Latency(us) 00:09:38.283 [2024-11-20T11:22:23.872Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:38.283 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:38.283 Nvme0n1 : 1.00 8006.00 31.27 0.00 0.00 0.00 0.00 0.00 00:09:38.283 [2024-11-20T11:22:23.872Z] =================================================================================================================== 00:09:38.283 [2024-11-20T11:22:23.872Z] Total : 8006.00 31.27 0.00 0.00 0.00 0.00 0.00 00:09:38.283 00:09:39.218 11:22:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 1acb1262-c4ed-4872-829a-b1f0b7b16847 00:09:39.218 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:39.218 Nvme0n1 : 2.00 7996.50 31.24 0.00 0.00 0.00 0.00 0.00 00:09:39.218 [2024-11-20T11:22:24.807Z] =================================================================================================================== 00:09:39.218 [2024-11-20T11:22:24.807Z] Total : 7996.50 31.24 0.00 0.00 0.00 0.00 0.00 00:09:39.218 00:09:39.477 true 00:09:39.477 11:22:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1acb1262-c4ed-4872-829a-b1f0b7b16847 00:09:39.477 11:22:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:09:40.044 11:22:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:09:40.044 11:22:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:09:40.044 11:22:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 66166 00:09:40.302 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:40.302 Nvme0n1 : 3.00 7917.67 30.93 0.00 0.00 0.00 0.00 0.00 00:09:40.302 [2024-11-20T11:22:25.891Z] =================================================================================================================== 00:09:40.302 [2024-11-20T11:22:25.891Z] Total : 7917.67 30.93 0.00 0.00 0.00 0.00 0.00 00:09:40.302 00:09:41.236 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:41.236 Nvme0n1 : 4.00 7905.50 30.88 0.00 0.00 0.00 0.00 0.00 00:09:41.236 [2024-11-20T11:22:26.825Z] =================================================================================================================== 00:09:41.236 [2024-11-20T11:22:26.825Z] Total : 7905.50 30.88 0.00 0.00 0.00 
0.00 0.00 00:09:41.236 00:09:42.612 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:42.612 Nvme0n1 : 5.00 7775.80 30.37 0.00 0.00 0.00 0.00 0.00 00:09:42.612 [2024-11-20T11:22:28.201Z] =================================================================================================================== 00:09:42.612 [2024-11-20T11:22:28.201Z] Total : 7775.80 30.37 0.00 0.00 0.00 0.00 0.00 00:09:42.612 00:09:43.549 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:43.549 Nvme0n1 : 6.00 7759.33 30.31 0.00 0.00 0.00 0.00 0.00 00:09:43.549 [2024-11-20T11:22:29.138Z] =================================================================================================================== 00:09:43.549 [2024-11-20T11:22:29.138Z] Total : 7759.33 30.31 0.00 0.00 0.00 0.00 0.00 00:09:43.549 00:09:44.484 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:44.484 Nvme0n1 : 7.00 7749.57 30.27 0.00 0.00 0.00 0.00 0.00 00:09:44.484 [2024-11-20T11:22:30.073Z] =================================================================================================================== 00:09:44.484 [2024-11-20T11:22:30.073Z] Total : 7749.57 30.27 0.00 0.00 0.00 0.00 0.00 00:09:44.484 00:09:45.421 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:45.421 Nvme0n1 : 8.00 7735.00 30.21 0.00 0.00 0.00 0.00 0.00 00:09:45.421 [2024-11-20T11:22:31.010Z] =================================================================================================================== 00:09:45.421 [2024-11-20T11:22:31.010Z] Total : 7735.00 30.21 0.00 0.00 0.00 0.00 0.00 00:09:45.421 00:09:46.357 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:46.357 Nvme0n1 : 9.00 7722.44 30.17 0.00 0.00 0.00 0.00 0.00 00:09:46.357 [2024-11-20T11:22:31.946Z] =================================================================================================================== 00:09:46.357 [2024-11-20T11:22:31.946Z] Total : 7722.44 30.17 0.00 0.00 0.00 0.00 0.00 00:09:46.357 00:09:47.308 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:47.308 Nvme0n1 : 10.00 7705.40 30.10 0.00 0.00 0.00 0.00 0.00 00:09:47.308 [2024-11-20T11:22:32.897Z] =================================================================================================================== 00:09:47.308 [2024-11-20T11:22:32.897Z] Total : 7705.40 30.10 0.00 0.00 0.00 0.00 0.00 00:09:47.308 00:09:47.308 00:09:47.308 Latency(us) 00:09:47.308 [2024-11-20T11:22:32.897Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:47.308 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:47.308 Nvme0n1 : 10.01 7707.51 30.11 0.00 0.00 16602.01 7685.59 92465.34 00:09:47.308 [2024-11-20T11:22:32.897Z] =================================================================================================================== 00:09:47.308 [2024-11-20T11:22:32.897Z] Total : 7707.51 30.11 0.00 0.00 16602.01 7685.59 92465.34 00:09:47.308 { 00:09:47.308 "results": [ 00:09:47.308 { 00:09:47.308 "job": "Nvme0n1", 00:09:47.308 "core_mask": "0x2", 00:09:47.308 "workload": "randwrite", 00:09:47.308 "status": "finished", 00:09:47.308 "queue_depth": 128, 00:09:47.308 "io_size": 4096, 00:09:47.308 "runtime": 10.013873, 00:09:47.308 "iops": 7707.507375018637, 00:09:47.308 "mibps": 30.10745068366655, 00:09:47.308 "io_failed": 0, 00:09:47.308 "io_timeout": 0, 00:09:47.308 "avg_latency_us": 16602.00549218965, 
00:09:47.308 "min_latency_us": 7685.585454545455, 00:09:47.308 "max_latency_us": 92465.33818181818 00:09:47.308 } 00:09:47.308 ], 00:09:47.308 "core_count": 1 00:09:47.308 } 00:09:47.308 11:22:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 66131 00:09:47.308 11:22:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 66131 ']' 00:09:47.308 11:22:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 66131 00:09:47.308 11:22:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:09:47.308 11:22:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:47.308 11:22:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66131 00:09:47.308 killing process with pid 66131 00:09:47.308 Received shutdown signal, test time was about 10.000000 seconds 00:09:47.308 00:09:47.308 Latency(us) 00:09:47.308 [2024-11-20T11:22:32.897Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:47.308 [2024-11-20T11:22:32.897Z] =================================================================================================================== 00:09:47.308 [2024-11-20T11:22:32.897Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:47.308 11:22:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:09:47.308 11:22:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:09:47.308 11:22:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66131' 00:09:47.308 11:22:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 66131 00:09:47.308 11:22:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 66131 00:09:47.567 11:22:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:09:47.825 11:22:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:48.085 11:22:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1acb1262-c4ed-4872-829a-b1f0b7b16847 00:09:48.085 11:22:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:09:48.344 11:22:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:09:48.344 11:22:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:09:48.344 11:22:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:48.603 [2024-11-20 11:22:34.187581] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:09:48.860 
11:22:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1acb1262-c4ed-4872-829a-b1f0b7b16847 00:09:48.860 11:22:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:09:48.860 11:22:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1acb1262-c4ed-4872-829a-b1f0b7b16847 00:09:48.861 11:22:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:48.861 11:22:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:48.861 11:22:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:48.861 11:22:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:48.861 11:22:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:48.861 11:22:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:48.861 11:22:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:48.861 11:22:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:09:48.861 11:22:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1acb1262-c4ed-4872-829a-b1f0b7b16847 00:09:49.119 2024/11/20 11:22:34 error on JSON-RPC call, method: bdev_lvol_get_lvstores, params: map[uuid:1acb1262-c4ed-4872-829a-b1f0b7b16847], err: error received for bdev_lvol_get_lvstores method, err: Code=-19 Msg=No such device 00:09:49.119 request: 00:09:49.119 { 00:09:49.119 "method": "bdev_lvol_get_lvstores", 00:09:49.119 "params": { 00:09:49.119 "uuid": "1acb1262-c4ed-4872-829a-b1f0b7b16847" 00:09:49.119 } 00:09:49.119 } 00:09:49.119 Got JSON-RPC error response 00:09:49.119 GoRPCClient: error on JSON-RPC call 00:09:49.119 11:22:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:09:49.119 11:22:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:49.119 11:22:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:49.119 11:22:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:49.119 11:22:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:49.378 aio_bdev 00:09:49.378 11:22:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 6d5a6074-feae-428a-aac2-7bc62b385def 00:09:49.378 11:22:34 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=6d5a6074-feae-428a-aac2-7bc62b385def 00:09:49.378 11:22:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:49.378 11:22:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:09:49.378 11:22:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:49.378 11:22:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:49.378 11:22:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:49.637 11:22:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 6d5a6074-feae-428a-aac2-7bc62b385def -t 2000 00:09:49.896 [ 00:09:49.896 { 00:09:49.896 "aliases": [ 00:09:49.896 "lvs/lvol" 00:09:49.896 ], 00:09:49.896 "assigned_rate_limits": { 00:09:49.896 "r_mbytes_per_sec": 0, 00:09:49.896 "rw_ios_per_sec": 0, 00:09:49.896 "rw_mbytes_per_sec": 0, 00:09:49.896 "w_mbytes_per_sec": 0 00:09:49.896 }, 00:09:49.896 "block_size": 4096, 00:09:49.896 "claimed": false, 00:09:49.896 "driver_specific": { 00:09:49.896 "lvol": { 00:09:49.896 "base_bdev": "aio_bdev", 00:09:49.896 "clone": false, 00:09:49.896 "esnap_clone": false, 00:09:49.896 "lvol_store_uuid": "1acb1262-c4ed-4872-829a-b1f0b7b16847", 00:09:49.896 "num_allocated_clusters": 38, 00:09:49.896 "snapshot": false, 00:09:49.896 "thin_provision": false 00:09:49.896 } 00:09:49.896 }, 00:09:49.896 "name": "6d5a6074-feae-428a-aac2-7bc62b385def", 00:09:49.896 "num_blocks": 38912, 00:09:49.896 "product_name": "Logical Volume", 00:09:49.896 "supported_io_types": { 00:09:49.896 "abort": false, 00:09:49.896 "compare": false, 00:09:49.896 "compare_and_write": false, 00:09:49.896 "copy": false, 00:09:49.896 "flush": false, 00:09:49.896 "get_zone_info": false, 00:09:49.896 "nvme_admin": false, 00:09:49.896 "nvme_io": false, 00:09:49.896 "nvme_io_md": false, 00:09:49.896 "nvme_iov_md": false, 00:09:49.896 "read": true, 00:09:49.896 "reset": true, 00:09:49.896 "seek_data": true, 00:09:49.896 "seek_hole": true, 00:09:49.896 "unmap": true, 00:09:49.896 "write": true, 00:09:49.896 "write_zeroes": true, 00:09:49.896 "zcopy": false, 00:09:49.896 "zone_append": false, 00:09:49.896 "zone_management": false 00:09:49.896 }, 00:09:49.896 "uuid": "6d5a6074-feae-428a-aac2-7bc62b385def", 00:09:49.896 "zoned": false 00:09:49.896 } 00:09:49.896 ] 00:09:49.896 11:22:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:09:49.896 11:22:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1acb1262-c4ed-4872-829a-b1f0b7b16847 00:09:49.896 11:22:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:09:50.464 11:22:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:09:50.464 11:22:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
1acb1262-c4ed-4872-829a-b1f0b7b16847 00:09:50.464 11:22:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:09:50.722 11:22:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:09:50.722 11:22:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 6d5a6074-feae-428a-aac2-7bc62b385def 00:09:50.982 11:22:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 1acb1262-c4ed-4872-829a-b1f0b7b16847 00:09:51.241 11:22:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:51.807 11:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:52.065 ************************************ 00:09:52.065 END TEST lvs_grow_clean 00:09:52.065 ************************************ 00:09:52.065 00:09:52.065 real 0m18.981s 00:09:52.065 user 0m18.411s 00:09:52.065 sys 0m2.189s 00:09:52.065 11:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:52.065 11:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:09:52.065 11:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:09:52.065 11:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:52.065 11:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:52.065 11:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:52.065 ************************************ 00:09:52.065 START TEST lvs_grow_dirty 00:09:52.065 ************************************ 00:09:52.065 11:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:09:52.065 11:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:09:52.065 11:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:09:52.065 11:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:09:52.065 11:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:09:52.065 11:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:09:52.065 11:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:09:52.065 11:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:52.065 11:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:52.065 
11:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:52.631 11:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:09:52.631 11:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:09:52.889 11:22:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=dbdf1da0-c6ca-48ad-9882-a26c79d47c54 00:09:52.889 11:22:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:09:52.889 11:22:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dbdf1da0-c6ca-48ad-9882-a26c79d47c54 00:09:53.147 11:22:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:09:53.147 11:22:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:09:53.148 11:22:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u dbdf1da0-c6ca-48ad-9882-a26c79d47c54 lvol 150 00:09:53.406 11:22:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=497d12eb-5265-4971-adc3-2ae2b89f2e2b 00:09:53.406 11:22:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:53.406 11:22:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:09:53.972 [2024-11-20 11:22:39.258653] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:09:53.972 [2024-11-20 11:22:39.258750] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:09:53.972 true 00:09:53.972 11:22:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dbdf1da0-c6ca-48ad-9882-a26c79d47c54 00:09:53.972 11:22:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:09:54.231 11:22:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:09:54.231 11:22:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:54.489 11:22:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 497d12eb-5265-4971-adc3-2ae2b89f2e2b 00:09:54.748 11:22:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:09:55.007 [2024-11-20 11:22:40.543303] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:09:55.007 11:22:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:09:55.573 11:22:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=66581 00:09:55.573 11:22:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:09:55.573 11:22:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:55.573 11:22:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 66581 /var/tmp/bdevperf.sock 00:09:55.573 11:22:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 66581 ']' 00:09:55.573 11:22:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:55.573 11:22:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:55.573 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:55.573 11:22:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:55.573 11:22:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:55.573 11:22:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:55.573 [2024-11-20 11:22:40.949105] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 
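Note (editorial aside, not captured output): the dirty-grow variant has just completed its target-side setup in the trace above: a 200M file-backed AIO bdev, an lvstore on it (49 clusters of 4 MiB), a 150 MiB lvol, then the backing file is grown to 400M and bdev_aio_rescan picks up the new size (51200 -> 102400 blocks) before the lvol is exported over NVMe/TCP. A condensed, illustrative recap of that sequence with the same rpc.py calls; it assumes a running nvmf_tgt with a TCP transport already created (done earlier in this log), and the RPC/AIO shell variables are shorthand introduced here, while the paths, names, and UUIDs are the ones from this run:

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  AIO=/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev
  truncate -s 200M "$AIO"
  "$RPC" bdev_aio_create "$AIO" aio_bdev 4096
  "$RPC" bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs
  "$RPC" bdev_lvol_create -u dbdf1da0-c6ca-48ad-9882-a26c79d47c54 lvol 150   # 150 MiB lvol in the new lvstore
  truncate -s 400M "$AIO"                                                    # grow the backing file
  "$RPC" bdev_aio_rescan aio_bdev                                            # AIO bdev resizes 51200 -> 102400 blocks
  "$RPC" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  "$RPC" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 497d12eb-5265-4971-adc3-2ae2b89f2e2b
  "$RPC" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420
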
00:09:55.573 [2024-11-20 11:22:40.949949] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66581 ] 00:09:55.573 [2024-11-20 11:22:41.105203] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:55.573 [2024-11-20 11:22:41.145304] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:56.541 11:22:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:56.541 11:22:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:09:56.541 11:22:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:09:56.799 Nvme0n1 00:09:56.799 11:22:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:09:57.366 [ 00:09:57.366 { 00:09:57.366 "aliases": [ 00:09:57.366 "497d12eb-5265-4971-adc3-2ae2b89f2e2b" 00:09:57.366 ], 00:09:57.366 "assigned_rate_limits": { 00:09:57.366 "r_mbytes_per_sec": 0, 00:09:57.366 "rw_ios_per_sec": 0, 00:09:57.366 "rw_mbytes_per_sec": 0, 00:09:57.366 "w_mbytes_per_sec": 0 00:09:57.366 }, 00:09:57.366 "block_size": 4096, 00:09:57.366 "claimed": false, 00:09:57.366 "driver_specific": { 00:09:57.366 "mp_policy": "active_passive", 00:09:57.366 "nvme": [ 00:09:57.366 { 00:09:57.366 "ctrlr_data": { 00:09:57.366 "ana_reporting": false, 00:09:57.366 "cntlid": 1, 00:09:57.366 "firmware_revision": "25.01", 00:09:57.366 "model_number": "SPDK bdev Controller", 00:09:57.366 "multi_ctrlr": true, 00:09:57.366 "oacs": { 00:09:57.366 "firmware": 0, 00:09:57.366 "format": 0, 00:09:57.366 "ns_manage": 0, 00:09:57.366 "security": 0 00:09:57.366 }, 00:09:57.366 "serial_number": "SPDK0", 00:09:57.366 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:57.366 "vendor_id": "0x8086" 00:09:57.366 }, 00:09:57.366 "ns_data": { 00:09:57.366 "can_share": true, 00:09:57.366 "id": 1 00:09:57.366 }, 00:09:57.366 "trid": { 00:09:57.366 "adrfam": "IPv4", 00:09:57.366 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:57.366 "traddr": "10.0.0.3", 00:09:57.366 "trsvcid": "4420", 00:09:57.366 "trtype": "TCP" 00:09:57.366 }, 00:09:57.366 "vs": { 00:09:57.366 "nvme_version": "1.3" 00:09:57.366 } 00:09:57.366 } 00:09:57.366 ] 00:09:57.366 }, 00:09:57.366 "memory_domains": [ 00:09:57.366 { 00:09:57.366 "dma_device_id": "system", 00:09:57.366 "dma_device_type": 1 00:09:57.366 } 00:09:57.366 ], 00:09:57.366 "name": "Nvme0n1", 00:09:57.366 "num_blocks": 38912, 00:09:57.366 "numa_id": -1, 00:09:57.366 "product_name": "NVMe disk", 00:09:57.366 "supported_io_types": { 00:09:57.366 "abort": true, 00:09:57.366 "compare": true, 00:09:57.366 "compare_and_write": true, 00:09:57.366 "copy": true, 00:09:57.366 "flush": true, 00:09:57.366 "get_zone_info": false, 00:09:57.366 "nvme_admin": true, 00:09:57.366 "nvme_io": true, 00:09:57.366 "nvme_io_md": false, 00:09:57.366 "nvme_iov_md": false, 00:09:57.366 "read": true, 00:09:57.366 "reset": true, 00:09:57.366 "seek_data": false, 00:09:57.366 "seek_hole": false, 00:09:57.366 "unmap": true, 00:09:57.366 
"write": true, 00:09:57.366 "write_zeroes": true, 00:09:57.366 "zcopy": false, 00:09:57.366 "zone_append": false, 00:09:57.366 "zone_management": false 00:09:57.366 }, 00:09:57.366 "uuid": "497d12eb-5265-4971-adc3-2ae2b89f2e2b", 00:09:57.366 "zoned": false 00:09:57.366 } 00:09:57.366 ] 00:09:57.366 11:22:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=66629 00:09:57.366 11:22:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:57.366 11:22:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:09:57.366 Running I/O for 10 seconds... 00:09:58.301 Latency(us) 00:09:58.301 [2024-11-20T11:22:43.890Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:58.301 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:58.301 Nvme0n1 : 1.00 7956.00 31.08 0.00 0.00 0.00 0.00 0.00 00:09:58.301 [2024-11-20T11:22:43.890Z] =================================================================================================================== 00:09:58.301 [2024-11-20T11:22:43.890Z] Total : 7956.00 31.08 0.00 0.00 0.00 0.00 0.00 00:09:58.301 00:09:59.235 11:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u dbdf1da0-c6ca-48ad-9882-a26c79d47c54 00:09:59.494 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:59.494 Nvme0n1 : 2.00 7696.00 30.06 0.00 0.00 0.00 0.00 0.00 00:09:59.494 [2024-11-20T11:22:45.083Z] =================================================================================================================== 00:09:59.494 [2024-11-20T11:22:45.083Z] Total : 7696.00 30.06 0.00 0.00 0.00 0.00 0.00 00:09:59.494 00:09:59.494 true 00:09:59.494 11:22:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dbdf1da0-c6ca-48ad-9882-a26c79d47c54 00:09:59.494 11:22:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:10:00.061 11:22:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:10:00.061 11:22:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:10:00.061 11:22:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 66629 00:10:00.320 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:00.320 Nvme0n1 : 3.00 7716.00 30.14 0.00 0.00 0.00 0.00 0.00 00:10:00.320 [2024-11-20T11:22:45.909Z] =================================================================================================================== 00:10:00.320 [2024-11-20T11:22:45.909Z] Total : 7716.00 30.14 0.00 0.00 0.00 0.00 0.00 00:10:00.320 00:10:01.255 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:01.255 Nvme0n1 : 4.00 7737.25 30.22 0.00 0.00 0.00 0.00 0.00 00:10:01.255 [2024-11-20T11:22:46.844Z] =================================================================================================================== 00:10:01.255 [2024-11-20T11:22:46.844Z] Total : 7737.25 30.22 0.00 0.00 0.00 
0.00 0.00 00:10:01.255 00:10:02.630 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:02.630 Nvme0n1 : 5.00 7595.00 29.67 0.00 0.00 0.00 0.00 0.00 00:10:02.630 [2024-11-20T11:22:48.219Z] =================================================================================================================== 00:10:02.630 [2024-11-20T11:22:48.220Z] Total : 7595.00 29.67 0.00 0.00 0.00 0.00 0.00 00:10:02.631 00:10:03.567 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:03.567 Nvme0n1 : 6.00 7573.50 29.58 0.00 0.00 0.00 0.00 0.00 00:10:03.567 [2024-11-20T11:22:49.156Z] =================================================================================================================== 00:10:03.567 [2024-11-20T11:22:49.156Z] Total : 7573.50 29.58 0.00 0.00 0.00 0.00 0.00 00:10:03.567 00:10:04.499 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:04.499 Nvme0n1 : 7.00 7537.29 29.44 0.00 0.00 0.00 0.00 0.00 00:10:04.499 [2024-11-20T11:22:50.088Z] =================================================================================================================== 00:10:04.499 [2024-11-20T11:22:50.088Z] Total : 7537.29 29.44 0.00 0.00 0.00 0.00 0.00 00:10:04.499 00:10:05.433 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:05.433 Nvme0n1 : 8.00 7499.12 29.29 0.00 0.00 0.00 0.00 0.00 00:10:05.433 [2024-11-20T11:22:51.022Z] =================================================================================================================== 00:10:05.433 [2024-11-20T11:22:51.022Z] Total : 7499.12 29.29 0.00 0.00 0.00 0.00 0.00 00:10:05.433 00:10:06.367 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:06.367 Nvme0n1 : 9.00 7470.00 29.18 0.00 0.00 0.00 0.00 0.00 00:10:06.367 [2024-11-20T11:22:51.956Z] =================================================================================================================== 00:10:06.367 [2024-11-20T11:22:51.956Z] Total : 7470.00 29.18 0.00 0.00 0.00 0.00 0.00 00:10:06.367 00:10:07.302 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:07.302 Nvme0n1 : 10.00 7469.30 29.18 0.00 0.00 0.00 0.00 0.00 00:10:07.302 [2024-11-20T11:22:52.891Z] =================================================================================================================== 00:10:07.302 [2024-11-20T11:22:52.891Z] Total : 7469.30 29.18 0.00 0.00 0.00 0.00 0.00 00:10:07.302 00:10:07.302 00:10:07.302 Latency(us) 00:10:07.302 [2024-11-20T11:22:52.891Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:07.302 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:07.302 Nvme0n1 : 10.01 7472.21 29.19 0.00 0.00 17124.82 4766.25 109147.23 00:10:07.302 [2024-11-20T11:22:52.891Z] =================================================================================================================== 00:10:07.302 [2024-11-20T11:22:52.891Z] Total : 7472.21 29.19 0.00 0.00 17124.82 4766.25 109147.23 00:10:07.302 { 00:10:07.302 "results": [ 00:10:07.302 { 00:10:07.302 "job": "Nvme0n1", 00:10:07.302 "core_mask": "0x2", 00:10:07.302 "workload": "randwrite", 00:10:07.302 "status": "finished", 00:10:07.302 "queue_depth": 128, 00:10:07.302 "io_size": 4096, 00:10:07.302 "runtime": 10.013238, 00:10:07.302 "iops": 7472.2082906648175, 00:10:07.302 "mibps": 29.188313635409443, 00:10:07.302 "io_failed": 0, 00:10:07.302 "io_timeout": 0, 00:10:07.302 "avg_latency_us": 
17124.815565877834, 00:10:07.302 "min_latency_us": 4766.254545454545, 00:10:07.302 "max_latency_us": 109147.2290909091 00:10:07.302 } 00:10:07.302 ], 00:10:07.302 "core_count": 1 00:10:07.302 } 00:10:07.302 11:22:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 66581 00:10:07.302 11:22:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 66581 ']' 00:10:07.302 11:22:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 66581 00:10:07.302 11:22:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:10:07.302 11:22:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:07.302 11:22:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66581 00:10:07.560 11:22:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:10:07.560 killing process with pid 66581 00:10:07.560 11:22:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:10:07.560 11:22:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66581' 00:10:07.560 Received shutdown signal, test time was about 10.000000 seconds 00:10:07.560 00:10:07.560 Latency(us) 00:10:07.560 [2024-11-20T11:22:53.149Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:07.560 [2024-11-20T11:22:53.149Z] =================================================================================================================== 00:10:07.560 [2024-11-20T11:22:53.149Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:10:07.560 11:22:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 66581 00:10:07.560 11:22:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 66581 00:10:07.560 11:22:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:10:07.819 11:22:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:10:08.386 11:22:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dbdf1da0-c6ca-48ad-9882-a26c79d47c54 00:10:08.386 11:22:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:10:08.386 11:22:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:10:08.386 11:22:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:10:08.386 11:22:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 65963 00:10:08.386 11:22:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 65963 00:10:08.645 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 65963 Killed "${NVMF_APP[@]}" "$@" 00:10:08.645 11:22:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:10:08.645 11:22:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:10:08.645 11:22:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:08.645 11:22:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:08.645 11:22:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:10:08.645 11:22:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=66797 00:10:08.645 11:22:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 66797 00:10:08.645 11:22:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:10:08.645 11:22:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 66797 ']' 00:10:08.645 11:22:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:08.645 11:22:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:08.645 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:08.645 11:22:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:08.645 11:22:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:08.645 11:22:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:10:08.645 [2024-11-20 11:22:54.044943] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 00:10:08.645 [2024-11-20 11:22:54.045041] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:08.645 [2024-11-20 11:22:54.193119] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:08.645 [2024-11-20 11:22:54.226116] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:08.645 [2024-11-20 11:22:54.226169] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:08.645 [2024-11-20 11:22:54.226181] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:08.645 [2024-11-20 11:22:54.226189] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:08.645 [2024-11-20 11:22:54.226196] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
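Note (editorial aside, not captured output): the previous target (pid 65963) was killed with SIGKILL above while the grown lvstore was still loaded, so it was not cleanly unloaded, and a fresh nvmf_tgt has just started. The trace that follows re-creates the AIO bdev, which makes the blobstore run recovery ("Performing recovery on blobstore"), and then checks that the grown geometry survived (free_clusters 61, total_data_clusters 99 in this run). A minimal sketch of that re-attach-and-verify step, reusing the paths and lvstore UUID from this run (the RPC variable is shorthand added here):

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  "$RPC" bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096   # triggers blobstore recovery on the dirty lvstore
  "$RPC" bdev_lvol_get_lvstores -u dbdf1da0-c6ca-48ad-9882-a26c79d47c54 | jq -r '.[0].free_clusters'        # 61 in this run
  "$RPC" bdev_lvol_get_lvstores -u dbdf1da0-c6ca-48ad-9882-a26c79d47c54 | jq -r '.[0].total_data_clusters'  # 99 in this run
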
00:10:08.645 [2024-11-20 11:22:54.226497] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:08.904 11:22:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:08.904 11:22:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:10:08.904 11:22:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:08.904 11:22:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:08.904 11:22:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:10:08.904 11:22:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:08.904 11:22:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:10:09.162 [2024-11-20 11:22:54.639719] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:10:09.162 [2024-11-20 11:22:54.639978] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:10:09.162 [2024-11-20 11:22:54.640129] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:10:09.162 11:22:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:10:09.162 11:22:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 497d12eb-5265-4971-adc3-2ae2b89f2e2b 00:10:09.162 11:22:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=497d12eb-5265-4971-adc3-2ae2b89f2e2b 00:10:09.162 11:22:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:09.162 11:22:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:10:09.162 11:22:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:09.162 11:22:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:09.162 11:22:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:10:09.421 11:22:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 497d12eb-5265-4971-adc3-2ae2b89f2e2b -t 2000 00:10:09.679 [ 00:10:09.679 { 00:10:09.679 "aliases": [ 00:10:09.679 "lvs/lvol" 00:10:09.679 ], 00:10:09.679 "assigned_rate_limits": { 00:10:09.679 "r_mbytes_per_sec": 0, 00:10:09.679 "rw_ios_per_sec": 0, 00:10:09.679 "rw_mbytes_per_sec": 0, 00:10:09.679 "w_mbytes_per_sec": 0 00:10:09.679 }, 00:10:09.679 "block_size": 4096, 00:10:09.679 "claimed": false, 00:10:09.679 "driver_specific": { 00:10:09.679 "lvol": { 00:10:09.679 "base_bdev": "aio_bdev", 00:10:09.679 "clone": false, 00:10:09.679 "esnap_clone": false, 00:10:09.679 "lvol_store_uuid": "dbdf1da0-c6ca-48ad-9882-a26c79d47c54", 00:10:09.679 "num_allocated_clusters": 38, 00:10:09.679 "snapshot": false, 00:10:09.679 
"thin_provision": false 00:10:09.679 } 00:10:09.679 }, 00:10:09.679 "name": "497d12eb-5265-4971-adc3-2ae2b89f2e2b", 00:10:09.679 "num_blocks": 38912, 00:10:09.679 "product_name": "Logical Volume", 00:10:09.679 "supported_io_types": { 00:10:09.679 "abort": false, 00:10:09.679 "compare": false, 00:10:09.679 "compare_and_write": false, 00:10:09.679 "copy": false, 00:10:09.679 "flush": false, 00:10:09.679 "get_zone_info": false, 00:10:09.679 "nvme_admin": false, 00:10:09.679 "nvme_io": false, 00:10:09.679 "nvme_io_md": false, 00:10:09.679 "nvme_iov_md": false, 00:10:09.679 "read": true, 00:10:09.679 "reset": true, 00:10:09.679 "seek_data": true, 00:10:09.679 "seek_hole": true, 00:10:09.679 "unmap": true, 00:10:09.679 "write": true, 00:10:09.679 "write_zeroes": true, 00:10:09.679 "zcopy": false, 00:10:09.679 "zone_append": false, 00:10:09.679 "zone_management": false 00:10:09.679 }, 00:10:09.679 "uuid": "497d12eb-5265-4971-adc3-2ae2b89f2e2b", 00:10:09.679 "zoned": false 00:10:09.679 } 00:10:09.679 ] 00:10:09.679 11:22:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:10:09.679 11:22:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dbdf1da0-c6ca-48ad-9882-a26c79d47c54 00:10:09.679 11:22:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:10:09.937 11:22:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:10:09.937 11:22:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dbdf1da0-c6ca-48ad-9882-a26c79d47c54 00:10:09.937 11:22:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:10:10.505 11:22:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:10:10.505 11:22:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:10:10.764 [2024-11-20 11:22:56.105378] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:10:10.764 11:22:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dbdf1da0-c6ca-48ad-9882-a26c79d47c54 00:10:10.764 11:22:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:10:10.764 11:22:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dbdf1da0-c6ca-48ad-9882-a26c79d47c54 00:10:10.764 11:22:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:10.764 11:22:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:10.764 11:22:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:10.764 11:22:56 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:10.764 11:22:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:10.764 11:22:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:10.764 11:22:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:10.764 11:22:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:10:10.764 11:22:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dbdf1da0-c6ca-48ad-9882-a26c79d47c54 00:10:11.024 2024/11/20 11:22:56 error on JSON-RPC call, method: bdev_lvol_get_lvstores, params: map[uuid:dbdf1da0-c6ca-48ad-9882-a26c79d47c54], err: error received for bdev_lvol_get_lvstores method, err: Code=-19 Msg=No such device 00:10:11.024 request: 00:10:11.024 { 00:10:11.024 "method": "bdev_lvol_get_lvstores", 00:10:11.024 "params": { 00:10:11.024 "uuid": "dbdf1da0-c6ca-48ad-9882-a26c79d47c54" 00:10:11.024 } 00:10:11.024 } 00:10:11.024 Got JSON-RPC error response 00:10:11.024 GoRPCClient: error on JSON-RPC call 00:10:11.024 11:22:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:10:11.024 11:22:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:11.024 11:22:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:11.024 11:22:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:11.024 11:22:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:10:11.282 aio_bdev 00:10:11.282 11:22:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 497d12eb-5265-4971-adc3-2ae2b89f2e2b 00:10:11.282 11:22:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=497d12eb-5265-4971-adc3-2ae2b89f2e2b 00:10:11.282 11:22:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:11.282 11:22:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:10:11.282 11:22:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:11.282 11:22:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:11.282 11:22:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:10:11.540 11:22:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 497d12eb-5265-4971-adc3-2ae2b89f2e2b -t 2000 00:10:11.799 [ 
00:10:11.799 { 00:10:11.799 "aliases": [ 00:10:11.799 "lvs/lvol" 00:10:11.799 ], 00:10:11.799 "assigned_rate_limits": { 00:10:11.799 "r_mbytes_per_sec": 0, 00:10:11.799 "rw_ios_per_sec": 0, 00:10:11.799 "rw_mbytes_per_sec": 0, 00:10:11.799 "w_mbytes_per_sec": 0 00:10:11.799 }, 00:10:11.799 "block_size": 4096, 00:10:11.799 "claimed": false, 00:10:11.799 "driver_specific": { 00:10:11.799 "lvol": { 00:10:11.799 "base_bdev": "aio_bdev", 00:10:11.799 "clone": false, 00:10:11.799 "esnap_clone": false, 00:10:11.799 "lvol_store_uuid": "dbdf1da0-c6ca-48ad-9882-a26c79d47c54", 00:10:11.799 "num_allocated_clusters": 38, 00:10:11.799 "snapshot": false, 00:10:11.799 "thin_provision": false 00:10:11.799 } 00:10:11.799 }, 00:10:11.799 "name": "497d12eb-5265-4971-adc3-2ae2b89f2e2b", 00:10:11.799 "num_blocks": 38912, 00:10:11.799 "product_name": "Logical Volume", 00:10:11.799 "supported_io_types": { 00:10:11.799 "abort": false, 00:10:11.799 "compare": false, 00:10:11.799 "compare_and_write": false, 00:10:11.799 "copy": false, 00:10:11.799 "flush": false, 00:10:11.799 "get_zone_info": false, 00:10:11.799 "nvme_admin": false, 00:10:11.799 "nvme_io": false, 00:10:11.799 "nvme_io_md": false, 00:10:11.799 "nvme_iov_md": false, 00:10:11.799 "read": true, 00:10:11.799 "reset": true, 00:10:11.799 "seek_data": true, 00:10:11.799 "seek_hole": true, 00:10:11.799 "unmap": true, 00:10:11.799 "write": true, 00:10:11.799 "write_zeroes": true, 00:10:11.799 "zcopy": false, 00:10:11.799 "zone_append": false, 00:10:11.799 "zone_management": false 00:10:11.799 }, 00:10:11.799 "uuid": "497d12eb-5265-4971-adc3-2ae2b89f2e2b", 00:10:11.799 "zoned": false 00:10:11.799 } 00:10:11.799 ] 00:10:11.799 11:22:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:10:11.799 11:22:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:10:11.799 11:22:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dbdf1da0-c6ca-48ad-9882-a26c79d47c54 00:10:12.402 11:22:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:10:12.402 11:22:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dbdf1da0-c6ca-48ad-9882-a26c79d47c54 00:10:12.402 11:22:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:10:12.661 11:22:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:10:12.661 11:22:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 497d12eb-5265-4971-adc3-2ae2b89f2e2b 00:10:12.920 11:22:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u dbdf1da0-c6ca-48ad-9882-a26c79d47c54 00:10:13.179 11:22:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:10:13.437 11:22:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:10:14.005 ************************************ 00:10:14.005 END TEST lvs_grow_dirty 00:10:14.005 ************************************ 00:10:14.005 00:10:14.005 real 0m21.684s 00:10:14.005 user 0m46.638s 00:10:14.005 sys 0m7.667s 00:10:14.005 11:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:14.005 11:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:10:14.005 11:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:10:14.005 11:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:10:14.005 11:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:10:14.005 11:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:10:14.005 11:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:10:14.005 11:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:10:14.005 11:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:10:14.005 11:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:10:14.005 11:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:10:14.005 nvmf_trace.0 00:10:14.005 11:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:10:14.005 11:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:10:14.005 11:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:14.005 11:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:10:14.005 11:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:14.005 11:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:10:14.005 11:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:14.005 11:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:14.005 rmmod nvme_tcp 00:10:14.005 rmmod nvme_fabrics 00:10:14.263 rmmod nvme_keyring 00:10:14.263 11:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:14.263 11:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:10:14.263 11:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:10:14.263 11:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 66797 ']' 00:10:14.263 11:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 66797 00:10:14.263 11:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 66797 ']' 00:10:14.263 11:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 66797 00:10:14.263 11:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:10:14.263 11:22:59 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:14.263 11:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66797 00:10:14.263 11:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:14.263 11:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:14.263 11:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66797' 00:10:14.263 killing process with pid 66797 00:10:14.263 11:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 66797 00:10:14.263 11:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 66797 00:10:14.263 11:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:14.263 11:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:14.263 11:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:14.263 11:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:10:14.263 11:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:10:14.263 11:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:14.263 11:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:10:14.263 11:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:14.263 11:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:10:14.263 11:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:10:14.263 11:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:10:14.263 11:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:10:14.263 11:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:10:14.522 11:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:10:14.522 11:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:10:14.522 11:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:10:14.522 11:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:10:14.522 11:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:10:14.522 11:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:10:14.522 11:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:10:14.522 11:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:14.522 11:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:14.522 11:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@246 -- # remove_spdk_ns 00:10:14.523 11:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:14.523 11:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:14.523 11:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:14.523 11:23:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@300 -- # return 0 00:10:14.523 ************************************ 00:10:14.523 END TEST nvmf_lvs_grow 00:10:14.523 ************************************ 00:10:14.523 00:10:14.523 real 0m43.529s 00:10:14.523 user 1m11.656s 00:10:14.523 sys 0m10.637s 00:10:14.523 11:23:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:14.523 11:23:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:14.523 11:23:00 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:10:14.523 11:23:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:14.523 11:23:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:14.523 11:23:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:14.523 ************************************ 00:10:14.523 START TEST nvmf_bdev_io_wait 00:10:14.523 ************************************ 00:10:14.523 11:23:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:10:14.782 * Looking for test storage... 
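For quick reference, the lvs_grow_dirty cleanup traced above reduces to four commands; a minimal sketch using the UUIDs and path from this particular run (illustrative values, not reusable as-is):

  #!/usr/bin/env bash
  # Tear down the dirty-grow fixture: delete the lvol, its lvstore, the backing AIO bdev, then the file.
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  "$rpc" bdev_lvol_delete 497d12eb-5265-4971-adc3-2ae2b89f2e2b
  "$rpc" bdev_lvol_delete_lvstore -u dbdf1da0-c6ca-48ad-9882-a26c79d47c54
  "$rpc" bdev_aio_delete aio_bdev
  rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev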
00:10:14.782 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:14.782 11:23:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:14.782 11:23:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lcov --version 00:10:14.782 11:23:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:14.782 11:23:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:14.782 11:23:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:14.782 11:23:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:14.782 11:23:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:14.782 11:23:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:10:14.782 11:23:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:10:14.782 11:23:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:10:14.782 11:23:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:10:14.782 11:23:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:10:14.782 11:23:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:10:14.782 11:23:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:10:14.782 11:23:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:14.782 11:23:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:10:14.782 11:23:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:10:14.782 11:23:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:14.782 11:23:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:14.782 11:23:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:10:14.782 11:23:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:10:14.782 11:23:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:14.782 11:23:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:10:14.782 11:23:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:10:14.782 11:23:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:10:14.782 11:23:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:10:14.782 11:23:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:14.782 11:23:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:10:14.782 11:23:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:10:14.782 11:23:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:14.782 11:23:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:14.782 11:23:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:10:14.782 11:23:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:14.782 11:23:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:14.782 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:14.782 --rc genhtml_branch_coverage=1 00:10:14.782 --rc genhtml_function_coverage=1 00:10:14.782 --rc genhtml_legend=1 00:10:14.782 --rc geninfo_all_blocks=1 00:10:14.782 --rc geninfo_unexecuted_blocks=1 00:10:14.782 00:10:14.782 ' 00:10:14.782 11:23:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:14.782 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:14.782 --rc genhtml_branch_coverage=1 00:10:14.782 --rc genhtml_function_coverage=1 00:10:14.782 --rc genhtml_legend=1 00:10:14.782 --rc geninfo_all_blocks=1 00:10:14.782 --rc geninfo_unexecuted_blocks=1 00:10:14.782 00:10:14.782 ' 00:10:14.782 11:23:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:14.782 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:14.782 --rc genhtml_branch_coverage=1 00:10:14.782 --rc genhtml_function_coverage=1 00:10:14.782 --rc genhtml_legend=1 00:10:14.782 --rc geninfo_all_blocks=1 00:10:14.782 --rc geninfo_unexecuted_blocks=1 00:10:14.782 00:10:14.782 ' 00:10:14.782 11:23:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:14.783 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:14.783 --rc genhtml_branch_coverage=1 00:10:14.783 --rc genhtml_function_coverage=1 00:10:14.783 --rc genhtml_legend=1 00:10:14.783 --rc geninfo_all_blocks=1 00:10:14.783 --rc geninfo_unexecuted_blocks=1 00:10:14.783 00:10:14.783 ' 00:10:14.783 11:23:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:14.783 11:23:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait 
-- nvmf/common.sh@7 -- # uname -s 00:10:14.783 11:23:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:14.783 11:23:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:14.783 11:23:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:14.783 11:23:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:14.783 11:23:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:14.783 11:23:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:14.783 11:23:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:14.783 11:23:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:14.783 11:23:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:14.783 11:23:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:14.783 11:23:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a 00:10:14.783 11:23:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=806d442c-08ba-4719-b76d-33169283459a 00:10:14.783 11:23:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:14.783 11:23:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:14.783 11:23:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:14.783 11:23:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:14.783 11:23:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:14.783 11:23:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:10:14.783 11:23:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:14.783 11:23:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:14.783 11:23:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:14.783 11:23:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:14.783 11:23:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:14.783 11:23:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:14.783 11:23:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:10:14.783 11:23:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:14.783 11:23:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:10:14.783 11:23:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:14.783 11:23:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:14.783 11:23:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:14.783 11:23:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:14.783 11:23:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:14.783 11:23:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:14.783 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:14.783 11:23:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:14.783 11:23:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:14.783 11:23:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:14.783 11:23:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:14.783 11:23:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 
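With NET_TYPE=virt, nvmftestinit rebuilds the virtual topology from scratch on every run; condensed from the nvmf_veth_init steps traced below (a sketch of the sequence, not the full helper):

  # Two initiator-side and two target-side veth pairs, bridged, with the target ends moved into a netns.
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if  type veth peer name nvmf_init_br
  ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
  ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip addr add 10.0.0.2/24 dev nvmf_init_if2
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
  ip link set nvmf_init_if up; ip link set nvmf_init_if2 up
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  for p in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
      ip link set "$p" master nvmf_br; ip link set "$p" up
  done
  # Allow NVMe/TCP (port 4420) in from the initiator interfaces; the real rules carry an SPDK_NVMF comment tag.
  iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
  iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT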
00:10:14.783 11:23:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:10:14.783 11:23:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:14.783 11:23:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:14.783 11:23:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:14.783 11:23:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:14.783 11:23:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:14.783 11:23:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:14.783 11:23:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:14.783 11:23:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:14.783 11:23:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:10:14.783 11:23:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:10:14.783 11:23:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:10:14.783 11:23:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:10:14.783 11:23:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:10:14.783 11:23:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@460 -- # nvmf_veth_init 00:10:14.783 11:23:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:14.783 11:23:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:10:14.783 11:23:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:10:14.783 11:23:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:10:14.783 11:23:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:14.783 11:23:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:10:14.783 11:23:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:14.783 11:23:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:10:14.783 11:23:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:14.783 11:23:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:10:14.783 11:23:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:14.783 11:23:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:14.783 11:23:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:14.783 11:23:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:14.783 
11:23:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:14.783 11:23:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:14.783 11:23:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:10:14.783 Cannot find device "nvmf_init_br" 00:10:14.783 11:23:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # true 00:10:14.783 11:23:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:10:14.783 Cannot find device "nvmf_init_br2" 00:10:14.784 11:23:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # true 00:10:14.784 11:23:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:10:14.784 Cannot find device "nvmf_tgt_br" 00:10:14.784 11:23:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@164 -- # true 00:10:14.784 11:23:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:10:14.784 Cannot find device "nvmf_tgt_br2" 00:10:14.784 11:23:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@165 -- # true 00:10:14.784 11:23:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:10:14.784 Cannot find device "nvmf_init_br" 00:10:14.784 11:23:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # true 00:10:14.784 11:23:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:10:14.784 Cannot find device "nvmf_init_br2" 00:10:14.784 11:23:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@167 -- # true 00:10:14.784 11:23:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:10:14.784 Cannot find device "nvmf_tgt_br" 00:10:14.784 11:23:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@168 -- # true 00:10:14.784 11:23:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:10:14.784 Cannot find device "nvmf_tgt_br2" 00:10:14.784 11:23:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # true 00:10:14.784 11:23:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:10:15.042 Cannot find device "nvmf_br" 00:10:15.042 11:23:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # true 00:10:15.042 11:23:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:10:15.042 Cannot find device "nvmf_init_if" 00:10:15.042 11:23:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # true 00:10:15.042 11:23:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:10:15.042 Cannot find device "nvmf_init_if2" 00:10:15.042 11:23:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@172 -- # true 00:10:15.042 11:23:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:15.042 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:15.042 11:23:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@173 -- # true 00:10:15.043 
11:23:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:15.043 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:15.043 11:23:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # true 00:10:15.043 11:23:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:10:15.043 11:23:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:15.043 11:23:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:10:15.043 11:23:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:15.043 11:23:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:15.043 11:23:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:15.043 11:23:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:15.043 11:23:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:15.043 11:23:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:10:15.043 11:23:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:10:15.043 11:23:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:10:15.043 11:23:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:10:15.043 11:23:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:10:15.043 11:23:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:10:15.043 11:23:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:10:15.043 11:23:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:10:15.043 11:23:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:10:15.043 11:23:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:15.043 11:23:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:15.043 11:23:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:15.043 11:23:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:10:15.043 11:23:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:10:15.043 11:23:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:10:15.043 11:23:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:10:15.043 11:23:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:15.043 11:23:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:15.301 11:23:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:15.301 11:23:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:10:15.301 11:23:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:10:15.301 11:23:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:10:15.301 11:23:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:15.301 11:23:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:10:15.301 11:23:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:10:15.301 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:15.301 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.112 ms 00:10:15.301 00:10:15.301 --- 10.0.0.3 ping statistics --- 00:10:15.301 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:15.301 rtt min/avg/max/mdev = 0.112/0.112/0.112/0.000 ms 00:10:15.301 11:23:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:10:15.301 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:10:15.301 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.049 ms 00:10:15.301 00:10:15.301 --- 10.0.0.4 ping statistics --- 00:10:15.301 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:15.301 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:10:15.301 11:23:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:15.301 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:15.301 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.019 ms 00:10:15.301 00:10:15.301 --- 10.0.0.1 ping statistics --- 00:10:15.301 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:15.301 rtt min/avg/max/mdev = 0.019/0.019/0.019/0.000 ms 00:10:15.301 11:23:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:10:15.301 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:15.301 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.040 ms 00:10:15.301 00:10:15.301 --- 10.0.0.2 ping statistics --- 00:10:15.301 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:15.301 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:10:15.301 11:23:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:15.301 11:23:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@461 -- # return 0 00:10:15.301 11:23:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:15.301 11:23:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:15.301 11:23:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:15.301 11:23:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:15.301 11:23:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:15.301 11:23:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:15.301 11:23:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:15.301 11:23:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:10:15.301 11:23:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:15.301 11:23:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:15.301 11:23:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:15.301 11:23:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=67252 00:10:15.301 11:23:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 67252 00:10:15.301 11:23:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:10:15.301 11:23:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 67252 ']' 00:10:15.301 11:23:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:15.301 11:23:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:15.301 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:15.301 11:23:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:15.301 11:23:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:15.301 11:23:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:15.301 [2024-11-20 11:23:00.743106] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 
00:10:15.301 [2024-11-20 11:23:00.743193] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:15.558 [2024-11-20 11:23:00.894357] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:15.558 [2024-11-20 11:23:00.938015] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:15.558 [2024-11-20 11:23:00.938067] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:15.558 [2024-11-20 11:23:00.938081] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:15.558 [2024-11-20 11:23:00.938091] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:15.558 [2024-11-20 11:23:00.938100] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:15.558 [2024-11-20 11:23:00.939062] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:15.558 [2024-11-20 11:23:00.939129] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:15.558 [2024-11-20 11:23:00.939275] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:15.558 [2024-11-20 11:23:00.939290] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:15.558 11:23:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:15.558 11:23:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:10:15.558 11:23:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:15.558 11:23:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:15.558 11:23:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:15.558 11:23:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:15.558 11:23:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:10:15.558 11:23:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.558 11:23:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:15.558 11:23:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.558 11:23:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:10:15.558 11:23:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.558 11:23:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:15.559 11:23:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.559 11:23:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:15.559 11:23:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.559 11:23:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
common/autotest_common.sh@10 -- # set +x 00:10:15.559 [2024-11-20 11:23:01.111491] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:15.559 11:23:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.559 11:23:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:15.559 11:23:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.559 11:23:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:15.559 Malloc0 00:10:15.559 11:23:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.559 11:23:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:15.559 11:23:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.559 11:23:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:15.816 11:23:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.816 11:23:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:15.816 11:23:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.816 11:23:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:15.816 11:23:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.816 11:23:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:10:15.816 11:23:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.816 11:23:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:15.816 [2024-11-20 11:23:01.159246] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:10:15.816 11:23:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.816 11:23:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=67297 00:10:15.816 11:23:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:10:15.816 11:23:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:10:15.816 11:23:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:10:15.816 11:23:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:10:15.816 11:23:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:15.816 11:23:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=67299 00:10:15.816 11:23:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:15.816 { 00:10:15.816 "params": { 
00:10:15.816 "name": "Nvme$subsystem", 00:10:15.816 "trtype": "$TEST_TRANSPORT", 00:10:15.816 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:15.816 "adrfam": "ipv4", 00:10:15.816 "trsvcid": "$NVMF_PORT", 00:10:15.816 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:15.816 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:15.816 "hdgst": ${hdgst:-false}, 00:10:15.816 "ddgst": ${ddgst:-false} 00:10:15.816 }, 00:10:15.816 "method": "bdev_nvme_attach_controller" 00:10:15.816 } 00:10:15.816 EOF 00:10:15.816 )") 00:10:15.816 11:23:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:10:15.816 11:23:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:10:15.816 11:23:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:10:15.816 11:23:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:10:15.816 11:23:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:15.816 11:23:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=67301 00:10:15.816 11:23:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:15.816 { 00:10:15.816 "params": { 00:10:15.816 "name": "Nvme$subsystem", 00:10:15.816 "trtype": "$TEST_TRANSPORT", 00:10:15.816 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:15.816 "adrfam": "ipv4", 00:10:15.816 "trsvcid": "$NVMF_PORT", 00:10:15.816 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:15.816 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:15.816 "hdgst": ${hdgst:-false}, 00:10:15.816 "ddgst": ${ddgst:-false} 00:10:15.816 }, 00:10:15.816 "method": "bdev_nvme_attach_controller" 00:10:15.816 } 00:10:15.816 EOF 00:10:15.816 )") 00:10:15.816 11:23:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:10:15.816 11:23:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:10:15.816 11:23:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:10:15.816 11:23:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=67304 00:10:15.816 11:23:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:10:15.816 11:23:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:10:15.816 11:23:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
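Each of the four workloads (write, read, flush, unmap) is a separate bdevperf instance fed the generated target JSON on /dev/fd/63. Restated as a stand-alone command, with process substitution standing in for the script's fd plumbing and the backgrounding/PID capture assumed from the WRITE_PID/READ_PID/FLUSH_PID/UNMAP_PID variables:

  # One of the four jobs (the core-mask 0x10 write instance); the others differ only in -m, -i and -w.
  # Assumes the nvmftestinit environment is already in place (TEST_TRANSPORT=tcp, NVMF_FIRST_TARGET_IP=10.0.0.3).
  source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh   # provides gen_nvmf_target_json
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 \
      --json <(gen_nvmf_target_json) -q 128 -o 4096 -w write -t 1 -s 256 &
  WRITE_PID=$!
  # The generated JSON attaches a single controller: Nvme1, trtype tcp, traddr 10.0.0.3, trsvcid 4420,
  # subnqn nqn.2016-06.io.spdk:cnode1, hostnqn nqn.2016-06.io.spdk:host1, header/data digests disabled.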
00:10:15.816 11:23:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:10:15.816 11:23:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:10:15.816 11:23:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:15.816 11:23:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:15.816 { 00:10:15.816 "params": { 00:10:15.816 "name": "Nvme$subsystem", 00:10:15.816 "trtype": "$TEST_TRANSPORT", 00:10:15.816 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:15.816 "adrfam": "ipv4", 00:10:15.816 "trsvcid": "$NVMF_PORT", 00:10:15.816 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:15.816 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:15.816 "hdgst": ${hdgst:-false}, 00:10:15.816 "ddgst": ${ddgst:-false} 00:10:15.816 }, 00:10:15.816 "method": "bdev_nvme_attach_controller" 00:10:15.816 } 00:10:15.816 EOF 00:10:15.816 )") 00:10:15.816 11:23:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:10:15.816 11:23:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:10:15.816 11:23:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:10:15.816 11:23:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:15.816 "params": { 00:10:15.816 "name": "Nvme1", 00:10:15.816 "trtype": "tcp", 00:10:15.816 "traddr": "10.0.0.3", 00:10:15.816 "adrfam": "ipv4", 00:10:15.816 "trsvcid": "4420", 00:10:15.816 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:15.816 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:15.816 "hdgst": false, 00:10:15.816 "ddgst": false 00:10:15.816 }, 00:10:15.816 "method": "bdev_nvme_attach_controller" 00:10:15.816 }' 00:10:15.816 11:23:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:10:15.816 11:23:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:10:15.816 11:23:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:15.816 "params": { 00:10:15.816 "name": "Nvme1", 00:10:15.816 "trtype": "tcp", 00:10:15.816 "traddr": "10.0.0.3", 00:10:15.816 "adrfam": "ipv4", 00:10:15.817 "trsvcid": "4420", 00:10:15.817 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:15.817 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:15.817 "hdgst": false, 00:10:15.817 "ddgst": false 00:10:15.817 }, 00:10:15.817 "method": "bdev_nvme_attach_controller" 00:10:15.817 }' 00:10:15.817 11:23:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:10:15.817 11:23:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:10:15.817 11:23:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:10:15.817 11:23:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:15.817 11:23:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:15.817 { 00:10:15.817 "params": { 00:10:15.817 "name": "Nvme$subsystem", 00:10:15.817 "trtype": "$TEST_TRANSPORT", 00:10:15.817 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:15.817 "adrfam": "ipv4", 00:10:15.817 "trsvcid": "$NVMF_PORT", 00:10:15.817 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:10:15.817 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:15.817 "hdgst": ${hdgst:-false}, 00:10:15.817 "ddgst": ${ddgst:-false} 00:10:15.817 }, 00:10:15.817 "method": "bdev_nvme_attach_controller" 00:10:15.817 } 00:10:15.817 EOF 00:10:15.817 )") 00:10:15.817 11:23:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:10:15.817 11:23:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:10:15.817 11:23:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:15.817 "params": { 00:10:15.817 "name": "Nvme1", 00:10:15.817 "trtype": "tcp", 00:10:15.817 "traddr": "10.0.0.3", 00:10:15.817 "adrfam": "ipv4", 00:10:15.817 "trsvcid": "4420", 00:10:15.817 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:15.817 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:15.817 "hdgst": false, 00:10:15.817 "ddgst": false 00:10:15.817 }, 00:10:15.817 "method": "bdev_nvme_attach_controller" 00:10:15.817 }' 00:10:15.817 11:23:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:10:15.817 11:23:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:10:15.817 11:23:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:10:15.817 11:23:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:15.817 "params": { 00:10:15.817 "name": "Nvme1", 00:10:15.817 "trtype": "tcp", 00:10:15.817 "traddr": "10.0.0.3", 00:10:15.817 "adrfam": "ipv4", 00:10:15.817 "trsvcid": "4420", 00:10:15.817 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:15.817 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:15.817 "hdgst": false, 00:10:15.817 "ddgst": false 00:10:15.817 }, 00:10:15.817 "method": "bdev_nvme_attach_controller" 00:10:15.817 }' 00:10:15.817 [2024-11-20 11:23:01.229026] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 00:10:15.817 [2024-11-20 11:23:01.229275] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:10:15.817 11:23:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 67297 00:10:15.817 [2024-11-20 11:23:01.235904] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 00:10:15.817 [2024-11-20 11:23:01.235986] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:10:15.817 [2024-11-20 11:23:01.249631] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 00:10:15.817 [2024-11-20 11:23:01.249861] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:10:15.817 [2024-11-20 11:23:01.266154] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 
00:10:15.817 [2024-11-20 11:23:01.266497] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:10:16.075 [2024-11-20 11:23:01.419806] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:16.075 [2024-11-20 11:23:01.452680] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:10:16.075 [2024-11-20 11:23:01.458492] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:16.075 [2024-11-20 11:23:01.485113] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:10:16.075 [2024-11-20 11:23:01.508132] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:16.075 [2024-11-20 11:23:01.540009] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:10:16.075 [2024-11-20 11:23:01.551337] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:16.075 Running I/O for 1 seconds... 00:10:16.075 [2024-11-20 11:23:01.582667] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:10:16.075 Running I/O for 1 seconds... 00:10:16.075 Running I/O for 1 seconds... 00:10:16.332 Running I/O for 1 seconds... 00:10:17.264 178464.00 IOPS, 697.12 MiB/s 00:10:17.264 Latency(us) 00:10:17.264 [2024-11-20T11:23:02.853Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:17.264 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:10:17.264 Nvme1n1 : 1.00 178074.72 695.60 0.00 0.00 714.60 305.34 2144.81 00:10:17.264 [2024-11-20T11:23:02.853Z] =================================================================================================================== 00:10:17.264 [2024-11-20T11:23:02.853Z] Total : 178074.72 695.60 0.00 0.00 714.60 305.34 2144.81 00:10:17.264 9821.00 IOPS, 38.36 MiB/s 00:10:17.264 Latency(us) 00:10:17.264 [2024-11-20T11:23:02.853Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:17.264 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:10:17.264 Nvme1n1 : 1.01 9855.77 38.50 0.00 0.00 12920.22 8340.95 21090.68 00:10:17.264 [2024-11-20T11:23:02.853Z] =================================================================================================================== 00:10:17.264 [2024-11-20T11:23:02.853Z] Total : 9855.77 38.50 0.00 0.00 12920.22 8340.95 21090.68 00:10:17.264 7050.00 IOPS, 27.54 MiB/s 00:10:17.264 Latency(us) 00:10:17.264 [2024-11-20T11:23:02.853Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:17.264 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:10:17.264 Nvme1n1 : 1.01 7083.52 27.67 0.00 0.00 17946.51 9234.62 25380.31 00:10:17.264 [2024-11-20T11:23:02.853Z] =================================================================================================================== 00:10:17.264 [2024-11-20T11:23:02.853Z] Total : 7083.52 27.67 0.00 0.00 17946.51 9234.62 25380.31 00:10:17.264 8368.00 IOPS, 32.69 MiB/s 00:10:17.264 Latency(us) 00:10:17.264 [2024-11-20T11:23:02.853Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:17.264 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:10:17.264 Nvme1n1 : 1.01 8446.10 32.99 0.00 0.00 15104.55 4408.79 24665.37 00:10:17.264 [2024-11-20T11:23:02.853Z] 
=================================================================================================================== 00:10:17.264 [2024-11-20T11:23:02.853Z] Total : 8446.10 32.99 0.00 0.00 15104.55 4408.79 24665.37 00:10:17.264 11:23:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 67299 00:10:17.264 11:23:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 67301 00:10:17.264 11:23:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 67304 00:10:17.264 11:23:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:17.264 11:23:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.264 11:23:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:17.264 11:23:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.264 11:23:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:10:17.264 11:23:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:10:17.264 11:23:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:17.264 11:23:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:10:17.560 11:23:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:17.560 11:23:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:10:17.560 11:23:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:17.560 11:23:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:17.560 rmmod nvme_tcp 00:10:17.560 rmmod nvme_fabrics 00:10:17.560 rmmod nvme_keyring 00:10:17.560 11:23:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:17.560 11:23:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:10:17.560 11:23:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:10:17.560 11:23:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 67252 ']' 00:10:17.560 11:23:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 67252 00:10:17.560 11:23:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 67252 ']' 00:10:17.560 11:23:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 67252 00:10:17.560 11:23:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:10:17.560 11:23:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:17.560 11:23:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67252 00:10:17.560 11:23:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:17.560 11:23:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:17.560 killing process with pid 67252 00:10:17.560 11:23:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 67252' 00:10:17.560 11:23:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 67252 00:10:17.560 11:23:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 67252 00:10:17.560 11:23:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:17.560 11:23:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:17.560 11:23:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:17.560 11:23:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:10:17.560 11:23:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:17.560 11:23:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:10:17.560 11:23:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:10:17.560 11:23:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:17.560 11:23:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:10:17.560 11:23:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:10:17.560 11:23:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:10:17.560 11:23:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:10:17.560 11:23:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:10:17.817 11:23:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:10:17.817 11:23:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:10:17.817 11:23:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:10:17.817 11:23:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:10:17.817 11:23:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:10:17.817 11:23:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:10:17.817 11:23:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:10:17.817 11:23:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:17.817 11:23:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:17.817 11:23:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@246 -- # remove_spdk_ns 00:10:17.817 11:23:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:17.817 11:23:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:17.817 11:23:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:17.817 11:23:03 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@300 -- # return 0 00:10:17.817 00:10:17.817 real 0m3.294s 00:10:17.817 user 0m12.817s 00:10:17.817 sys 0m1.928s 00:10:17.817 11:23:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:17.817 ************************************ 00:10:17.817 END TEST nvmf_bdev_io_wait 00:10:17.817 ************************************ 00:10:17.817 11:23:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:18.076 11:23:03 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:10:18.076 11:23:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:18.076 11:23:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:18.076 11:23:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:18.076 ************************************ 00:10:18.076 START TEST nvmf_queue_depth 00:10:18.076 ************************************ 00:10:18.076 11:23:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:10:18.076 * Looking for test storage... 00:10:18.076 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:18.076 11:23:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:18.076 11:23:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lcov --version 00:10:18.076 11:23:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:18.076 11:23:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:18.076 11:23:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:18.076 11:23:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:18.076 11:23:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:18.076 11:23:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:10:18.076 11:23:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:10:18.076 11:23:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:10:18.076 11:23:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:10:18.076 11:23:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:10:18.076 11:23:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:10:18.077 11:23:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:10:18.077 11:23:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:18.077 11:23:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:10:18.077 11:23:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:10:18.077 11:23:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:18.077 11:23:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:18.077 11:23:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:10:18.077 11:23:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:10:18.077 11:23:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:18.077 11:23:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:10:18.077 11:23:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:10:18.077 11:23:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:10:18.077 11:23:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:10:18.077 11:23:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:18.077 11:23:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:10:18.077 11:23:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:10:18.077 11:23:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:18.077 11:23:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:18.077 11:23:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:10:18.077 11:23:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:18.077 11:23:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:18.077 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:18.077 --rc genhtml_branch_coverage=1 00:10:18.077 --rc genhtml_function_coverage=1 00:10:18.077 --rc genhtml_legend=1 00:10:18.077 --rc geninfo_all_blocks=1 00:10:18.077 --rc geninfo_unexecuted_blocks=1 00:10:18.077 00:10:18.077 ' 00:10:18.077 11:23:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:18.077 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:18.077 --rc genhtml_branch_coverage=1 00:10:18.077 --rc genhtml_function_coverage=1 00:10:18.077 --rc genhtml_legend=1 00:10:18.077 --rc geninfo_all_blocks=1 00:10:18.077 --rc geninfo_unexecuted_blocks=1 00:10:18.077 00:10:18.077 ' 00:10:18.077 11:23:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:18.077 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:18.077 --rc genhtml_branch_coverage=1 00:10:18.077 --rc genhtml_function_coverage=1 00:10:18.077 --rc genhtml_legend=1 00:10:18.077 --rc geninfo_all_blocks=1 00:10:18.077 --rc geninfo_unexecuted_blocks=1 00:10:18.077 00:10:18.077 ' 00:10:18.077 11:23:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:18.077 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:18.077 --rc genhtml_branch_coverage=1 00:10:18.077 --rc genhtml_function_coverage=1 00:10:18.077 --rc genhtml_legend=1 00:10:18.077 --rc geninfo_all_blocks=1 00:10:18.077 --rc geninfo_unexecuted_blocks=1 00:10:18.077 00:10:18.077 ' 00:10:18.077 11:23:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:18.077 11:23:03 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:10:18.077 11:23:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:18.077 11:23:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:18.077 11:23:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:18.077 11:23:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:18.077 11:23:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:18.077 11:23:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:18.077 11:23:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:18.077 11:23:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:18.077 11:23:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:18.077 11:23:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:18.077 11:23:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a 00:10:18.077 11:23:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=806d442c-08ba-4719-b76d-33169283459a 00:10:18.077 11:23:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:18.077 11:23:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:18.077 11:23:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:18.077 11:23:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:18.077 11:23:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:18.077 11:23:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:10:18.077 11:23:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:18.077 11:23:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:18.077 11:23:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:18.077 11:23:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:18.077 11:23:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:18.077 11:23:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:18.077 11:23:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:10:18.077 11:23:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:18.077 11:23:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:10:18.077 11:23:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:18.077 11:23:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:18.077 11:23:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:18.077 11:23:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:18.077 11:23:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:18.077 11:23:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:18.077 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:18.077 11:23:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:18.077 11:23:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:18.077 11:23:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:18.077 11:23:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:10:18.077 11:23:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:10:18.077 
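For orientation before the network bring-up that follows, the sourcing of nvmf/common.sh above pins the test constants: TCP ports 4420-4422, a host NQN generated with nvme gen-hostnqn, and a 64 MiB / 512-byte-block malloc bdev backing the namespace. A minimal standalone sketch of that environment (values copied from the log; the uuidgen fallback is an assumption and is not part of the SPDK script):

#!/usr/bin/env bash
# Condensed, hypothetical version of the variables nvmf/common.sh exports for this test.
NVMF_PORT=4420
NVMF_SECOND_PORT=4421
NVMF_THIRD_PORT=4422
MALLOC_BDEV_SIZE=64       # MiB backing the test namespace
MALLOC_BLOCK_SIZE=512     # bytes per block
# nvme-cli derives the host NQN from a random UUID; the uuidgen fallback is an assumption.
NVME_HOSTNQN=$(nvme gen-hostnqn 2>/dev/null || echo "nqn.2014-08.org.nvmexpress:uuid:$(uuidgen)")
echo "host NQN: $NVME_HOSTNQN, target port: $NVMF_PORT"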
11:23:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:10:18.077 11:23:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:10:18.077 11:23:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:18.077 11:23:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:18.077 11:23:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:18.077 11:23:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:18.077 11:23:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:18.077 11:23:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:18.077 11:23:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:18.077 11:23:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:18.077 11:23:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:10:18.077 11:23:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:10:18.077 11:23:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:10:18.078 11:23:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:10:18.078 11:23:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:10:18.078 11:23:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@460 -- # nvmf_veth_init 00:10:18.078 11:23:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:18.078 11:23:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:10:18.078 11:23:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:10:18.078 11:23:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:10:18.078 11:23:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:18.078 11:23:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:10:18.078 11:23:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:18.078 11:23:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:10:18.078 11:23:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:18.078 11:23:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:10:18.078 11:23:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:18.078 11:23:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:18.078 11:23:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:18.078 11:23:03 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:18.078 11:23:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:18.078 11:23:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:18.078 11:23:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:10:18.078 Cannot find device "nvmf_init_br" 00:10:18.078 11:23:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@162 -- # true 00:10:18.078 11:23:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:10:18.078 Cannot find device "nvmf_init_br2" 00:10:18.078 11:23:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@163 -- # true 00:10:18.078 11:23:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:10:18.335 Cannot find device "nvmf_tgt_br" 00:10:18.335 11:23:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@164 -- # true 00:10:18.335 11:23:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:10:18.335 Cannot find device "nvmf_tgt_br2" 00:10:18.335 11:23:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@165 -- # true 00:10:18.335 11:23:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:10:18.335 Cannot find device "nvmf_init_br" 00:10:18.335 11:23:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@166 -- # true 00:10:18.335 11:23:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:10:18.335 Cannot find device "nvmf_init_br2" 00:10:18.335 11:23:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@167 -- # true 00:10:18.335 11:23:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:10:18.335 Cannot find device "nvmf_tgt_br" 00:10:18.335 11:23:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@168 -- # true 00:10:18.335 11:23:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:10:18.335 Cannot find device "nvmf_tgt_br2" 00:10:18.335 11:23:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@169 -- # true 00:10:18.336 11:23:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:10:18.336 Cannot find device "nvmf_br" 00:10:18.336 11:23:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@170 -- # true 00:10:18.336 11:23:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:10:18.336 Cannot find device "nvmf_init_if" 00:10:18.336 11:23:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@171 -- # true 00:10:18.336 11:23:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:10:18.336 Cannot find device "nvmf_init_if2" 00:10:18.336 11:23:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@172 -- # true 00:10:18.336 11:23:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:18.336 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:18.336 11:23:03 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@173 -- # true 00:10:18.336 11:23:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:18.336 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:18.336 11:23:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@174 -- # true 00:10:18.336 11:23:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:10:18.336 11:23:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:18.336 11:23:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:10:18.336 11:23:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:18.336 11:23:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:18.336 11:23:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:18.336 11:23:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:18.336 11:23:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:18.336 11:23:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:10:18.336 11:23:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:10:18.336 11:23:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:10:18.336 11:23:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:10:18.336 11:23:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:10:18.336 11:23:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:10:18.336 11:23:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:10:18.336 11:23:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:10:18.336 11:23:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:10:18.336 11:23:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:18.336 11:23:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:18.594 11:23:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:18.594 11:23:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:10:18.594 11:23:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:10:18.594 11:23:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:10:18.594 
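The ip commands above, together with the remaining bridge memberships, iptables ACCEPT rules, and connectivity pings that continue just below, assemble the virtual test network in one pass. A condensed sketch of the topology being built, run as root (interface names and 10.0.0.x addresses as they appear in the log; the second veth pair and IPv4 .2/.4 addresses are omitted here for brevity):

# Target network namespace plus one veth pair each for initiator and target.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
# Bridge the host-side veth ends so initiator and target can reach each other.
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
# Allow NVMe/TCP traffic to port 4420 on the initiator side, then verify reachability.
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.3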
11:23:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:10:18.594 11:23:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:18.594 11:23:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:18.594 11:23:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:18.594 11:23:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:10:18.594 11:23:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:10:18.594 11:23:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:10:18.594 11:23:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:18.594 11:23:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:10:18.594 11:23:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:10:18.594 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:18.594 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.058 ms 00:10:18.594 00:10:18.594 --- 10.0.0.3 ping statistics --- 00:10:18.594 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:18.594 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:10:18.594 11:23:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:10:18.594 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:10:18.594 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.042 ms 00:10:18.594 00:10:18.594 --- 10.0.0.4 ping statistics --- 00:10:18.595 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:18.595 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:10:18.595 11:23:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:18.595 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:18.595 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:10:18.595 00:10:18.595 --- 10.0.0.1 ping statistics --- 00:10:18.595 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:18.595 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:10:18.595 11:23:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:10:18.595 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:18.595 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.050 ms 00:10:18.595 00:10:18.595 --- 10.0.0.2 ping statistics --- 00:10:18.595 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:18.595 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:10:18.595 11:23:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:18.595 11:23:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@461 -- # return 0 00:10:18.595 11:23:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:18.595 11:23:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:18.595 11:23:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:18.595 11:23:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:18.595 11:23:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:18.595 11:23:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:18.595 11:23:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:18.595 11:23:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:10:18.595 11:23:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:18.595 11:23:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:18.595 11:23:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:18.595 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:18.595 11:23:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=67563 00:10:18.595 11:23:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 67563 00:10:18.595 11:23:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 67563 ']' 00:10:18.595 11:23:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:10:18.595 11:23:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:18.595 11:23:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:18.595 11:23:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:18.595 11:23:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:18.595 11:23:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:18.595 [2024-11-20 11:23:04.113837] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 
00:10:18.595 [2024-11-20 11:23:04.113941] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:18.854 [2024-11-20 11:23:04.325117] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:18.854 [2024-11-20 11:23:04.366057] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:18.854 [2024-11-20 11:23:04.366113] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:18.854 [2024-11-20 11:23:04.366125] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:18.854 [2024-11-20 11:23:04.366133] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:18.854 [2024-11-20 11:23:04.366140] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:18.854 [2024-11-20 11:23:04.366432] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:19.790 11:23:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:19.790 11:23:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:10:19.790 11:23:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:19.790 11:23:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:19.790 11:23:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:19.790 11:23:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:19.790 11:23:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:19.790 11:23:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.790 11:23:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:19.790 [2024-11-20 11:23:05.222058] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:19.790 11:23:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.790 11:23:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:19.790 11:23:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.790 11:23:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:19.790 Malloc0 00:10:19.790 11:23:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.790 11:23:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:19.790 11:23:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.790 11:23:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:19.790 11:23:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
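At this point the queue_depth test has started nvmf_tgt inside the target namespace, created the TCP transport, and created the Malloc0 bdev and cnode1 subsystem; the namespace and listener RPCs and the bdevperf launch continue just below. A consolidated sketch of the same bring-up, assuming rpc_cmd wraps scripts/rpc.py against the target's default /var/tmp/spdk.sock (every command and flag below is taken verbatim from the log, only the wrapper is assumed):

# Target-side configuration, equivalent to the rpc_cmd calls in the trace.
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC bdev_malloc_create 64 512 -b Malloc0
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
# Initiator side: bdevperf attaches over NVMe/TCP and drives a qd=1024 verify workload for 10 s.
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
sleep 2   # crude stand-in for the test's waitforlisten on /var/tmp/bdevperf.sock
$RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests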
00:10:19.790 11:23:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:19.790 11:23:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.790 11:23:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:19.790 11:23:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.790 11:23:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:10:19.790 11:23:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.790 11:23:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:19.790 [2024-11-20 11:23:05.260726] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:10:19.790 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:10:19.790 11:23:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.790 11:23:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=67614 00:10:19.790 11:23:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:10:19.790 11:23:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:19.790 11:23:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 67614 /var/tmp/bdevperf.sock 00:10:19.790 11:23:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 67614 ']' 00:10:19.790 11:23:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:10:19.790 11:23:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:19.790 11:23:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:10:19.790 11:23:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:19.790 11:23:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:19.790 [2024-11-20 11:23:05.327045] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 
00:10:19.790 [2024-11-20 11:23:05.327385] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67614 ] 00:10:20.049 [2024-11-20 11:23:05.476591] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:20.049 [2024-11-20 11:23:05.527981] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:20.049 11:23:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:20.049 11:23:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:10:20.049 11:23:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:10:20.049 11:23:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.049 11:23:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:20.308 NVMe0n1 00:10:20.308 11:23:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.308 11:23:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:10:20.308 Running I/O for 10 seconds... 00:10:22.617 7765.00 IOPS, 30.33 MiB/s [2024-11-20T11:23:09.141Z] 8086.50 IOPS, 31.59 MiB/s [2024-11-20T11:23:10.076Z] 8031.33 IOPS, 31.37 MiB/s [2024-11-20T11:23:11.011Z] 8028.25 IOPS, 31.36 MiB/s [2024-11-20T11:23:11.968Z] 8160.60 IOPS, 31.88 MiB/s [2024-11-20T11:23:12.918Z] 8170.50 IOPS, 31.92 MiB/s [2024-11-20T11:23:14.293Z] 8177.86 IOPS, 31.94 MiB/s [2024-11-20T11:23:14.861Z] 8271.50 IOPS, 32.31 MiB/s [2024-11-20T11:23:16.238Z] 8278.33 IOPS, 32.34 MiB/s [2024-11-20T11:23:16.238Z] 8350.60 IOPS, 32.62 MiB/s 00:10:30.649 Latency(us) 00:10:30.649 [2024-11-20T11:23:16.238Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:30.649 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:10:30.649 Verification LBA range: start 0x0 length 0x4000 00:10:30.649 NVMe0n1 : 10.09 8373.45 32.71 0.00 0.00 121695.69 27525.12 115819.99 00:10:30.649 [2024-11-20T11:23:16.238Z] =================================================================================================================== 00:10:30.649 [2024-11-20T11:23:16.238Z] Total : 8373.45 32.71 0.00 0.00 121695.69 27525.12 115819.99 00:10:30.649 { 00:10:30.649 "results": [ 00:10:30.649 { 00:10:30.649 "job": "NVMe0n1", 00:10:30.649 "core_mask": "0x1", 00:10:30.649 "workload": "verify", 00:10:30.649 "status": "finished", 00:10:30.649 "verify_range": { 00:10:30.649 "start": 0, 00:10:30.649 "length": 16384 00:10:30.649 }, 00:10:30.649 "queue_depth": 1024, 00:10:30.649 "io_size": 4096, 00:10:30.649 "runtime": 10.088907, 00:10:30.649 "iops": 8373.454131354367, 00:10:30.649 "mibps": 32.708805200602995, 00:10:30.649 "io_failed": 0, 00:10:30.649 "io_timeout": 0, 00:10:30.649 "avg_latency_us": 121695.68872496554, 00:10:30.649 "min_latency_us": 27525.12, 00:10:30.649 "max_latency_us": 115819.98545454546 00:10:30.649 } 00:10:30.649 ], 00:10:30.649 "core_count": 1 00:10:30.649 } 00:10:30.649 11:23:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
target/queue_depth.sh@39 -- # killprocess 67614 00:10:30.649 11:23:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 67614 ']' 00:10:30.649 11:23:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 67614 00:10:30.649 11:23:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:10:30.649 11:23:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:30.649 11:23:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67614 00:10:30.649 killing process with pid 67614 00:10:30.649 Received shutdown signal, test time was about 10.000000 seconds 00:10:30.649 00:10:30.649 Latency(us) 00:10:30.649 [2024-11-20T11:23:16.238Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:30.649 [2024-11-20T11:23:16.238Z] =================================================================================================================== 00:10:30.649 [2024-11-20T11:23:16.238Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:10:30.649 11:23:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:30.649 11:23:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:30.649 11:23:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67614' 00:10:30.649 11:23:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 67614 00:10:30.649 11:23:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 67614 00:10:30.649 11:23:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:10:30.649 11:23:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:10:30.649 11:23:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:30.649 11:23:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:10:30.649 11:23:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:30.649 11:23:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:10:30.649 11:23:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:30.649 11:23:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:30.649 rmmod nvme_tcp 00:10:30.649 rmmod nvme_fabrics 00:10:30.649 rmmod nvme_keyring 00:10:30.649 11:23:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:30.649 11:23:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:10:30.649 11:23:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:10:30.649 11:23:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 67563 ']' 00:10:30.649 11:23:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 67563 00:10:30.649 11:23:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 67563 ']' 00:10:30.649 11:23:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 67563 00:10:30.908 11:23:16 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:10:30.908 11:23:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:30.908 11:23:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67563 00:10:30.908 killing process with pid 67563 00:10:30.908 11:23:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:10:30.908 11:23:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:10:30.908 11:23:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67563' 00:10:30.908 11:23:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 67563 00:10:30.908 11:23:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 67563 00:10:30.908 11:23:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:30.908 11:23:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:30.908 11:23:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:30.908 11:23:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:10:30.908 11:23:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:10:30.908 11:23:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:30.908 11:23:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:10:30.908 11:23:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:30.908 11:23:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:10:30.908 11:23:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:10:30.908 11:23:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:10:30.908 11:23:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:10:30.908 11:23:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:10:30.908 11:23:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:10:30.908 11:23:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:10:30.908 11:23:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:10:30.908 11:23:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:10:31.167 11:23:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:10:31.167 11:23:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:10:31.167 11:23:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:10:31.167 11:23:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:31.167 11:23:16 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:31.167 11:23:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@246 -- # remove_spdk_ns 00:10:31.167 11:23:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:31.167 11:23:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:31.167 11:23:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:31.167 11:23:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@300 -- # return 0 00:10:31.167 00:10:31.167 real 0m13.249s 00:10:31.167 user 0m22.317s 00:10:31.167 sys 0m1.984s 00:10:31.167 11:23:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:31.167 ************************************ 00:10:31.167 END TEST nvmf_queue_depth 00:10:31.167 11:23:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:31.167 ************************************ 00:10:31.167 11:23:16 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:10:31.167 11:23:16 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:31.167 11:23:16 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:31.167 11:23:16 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:31.167 ************************************ 00:10:31.167 START TEST nvmf_target_multipath 00:10:31.167 ************************************ 00:10:31.167 11:23:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:10:31.427 * Looking for test storage... 
00:10:31.427 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:31.427 11:23:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:31.427 11:23:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lcov --version 00:10:31.427 11:23:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:31.427 11:23:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:31.427 11:23:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:31.427 11:23:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:31.427 11:23:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:31.427 11:23:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:10:31.427 11:23:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:10:31.427 11:23:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:10:31.427 11:23:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:10:31.427 11:23:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:10:31.427 11:23:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:10:31.427 11:23:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:10:31.427 11:23:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:31.427 11:23:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:10:31.427 11:23:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:10:31.427 11:23:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:31.427 11:23:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:31.427 11:23:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:10:31.427 11:23:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:10:31.428 11:23:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:31.428 11:23:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:10:31.428 11:23:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:10:31.428 11:23:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:10:31.428 11:23:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:10:31.428 11:23:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:31.428 11:23:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:10:31.428 11:23:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:10:31.428 11:23:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:31.428 11:23:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:31.428 11:23:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:10:31.428 11:23:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:31.428 11:23:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:31.428 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:31.428 --rc genhtml_branch_coverage=1 00:10:31.428 --rc genhtml_function_coverage=1 00:10:31.428 --rc genhtml_legend=1 00:10:31.428 --rc geninfo_all_blocks=1 00:10:31.428 --rc geninfo_unexecuted_blocks=1 00:10:31.428 00:10:31.428 ' 00:10:31.428 11:23:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:31.428 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:31.428 --rc genhtml_branch_coverage=1 00:10:31.428 --rc genhtml_function_coverage=1 00:10:31.428 --rc genhtml_legend=1 00:10:31.428 --rc geninfo_all_blocks=1 00:10:31.428 --rc geninfo_unexecuted_blocks=1 00:10:31.428 00:10:31.428 ' 00:10:31.428 11:23:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:31.428 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:31.428 --rc genhtml_branch_coverage=1 00:10:31.428 --rc genhtml_function_coverage=1 00:10:31.428 --rc genhtml_legend=1 00:10:31.428 --rc geninfo_all_blocks=1 00:10:31.428 --rc geninfo_unexecuted_blocks=1 00:10:31.428 00:10:31.428 ' 00:10:31.428 11:23:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:31.428 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:31.428 --rc genhtml_branch_coverage=1 00:10:31.428 --rc genhtml_function_coverage=1 00:10:31.428 --rc genhtml_legend=1 00:10:31.428 --rc geninfo_all_blocks=1 00:10:31.428 --rc geninfo_unexecuted_blocks=1 00:10:31.428 00:10:31.428 ' 00:10:31.428 11:23:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source 
/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:31.428 11:23:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:10:31.428 11:23:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:31.428 11:23:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:31.428 11:23:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:31.428 11:23:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:31.428 11:23:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:31.428 11:23:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:31.428 11:23:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:31.428 11:23:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:31.428 11:23:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:31.428 11:23:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:31.428 11:23:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a 00:10:31.428 11:23:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=806d442c-08ba-4719-b76d-33169283459a 00:10:31.428 11:23:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:31.428 11:23:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:31.428 11:23:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:31.428 11:23:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:31.428 11:23:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:31.428 11:23:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:10:31.428 11:23:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:31.428 11:23:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:31.428 11:23:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:31.428 11:23:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:31.428 
11:23:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:31.428 11:23:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:31.428 11:23:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:10:31.428 11:23:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:31.428 11:23:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:10:31.428 11:23:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:31.428 11:23:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:31.428 11:23:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:31.429 11:23:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:31.429 11:23:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:31.429 11:23:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:31.429 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:31.429 11:23:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:31.429 11:23:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:31.429 11:23:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:31.429 11:23:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:10:31.429 11:23:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:31.429 11:23:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:10:31.429 11:23:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:31.429 11:23:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:10:31.429 11:23:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:31.429 11:23:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:31.429 11:23:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:31.429 11:23:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:31.429 11:23:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:31.429 11:23:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:31.429 11:23:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:31.429 11:23:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:31.429 11:23:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:10:31.429 11:23:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:10:31.429 11:23:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:10:31.429 11:23:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:10:31.429 11:23:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:10:31.429 11:23:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@460 -- # nvmf_veth_init 00:10:31.429 11:23:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:31.429 11:23:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:10:31.429 11:23:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:10:31.429 11:23:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:10:31.429 11:23:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:31.429 11:23:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:10:31.429 11:23:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:31.429 11:23:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:10:31.429 11:23:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:31.429 11:23:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:10:31.429 11:23:16 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:31.429 11:23:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:31.429 11:23:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:31.429 11:23:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:31.429 11:23:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:31.429 11:23:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:31.429 11:23:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:10:31.429 Cannot find device "nvmf_init_br" 00:10:31.429 11:23:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@162 -- # true 00:10:31.429 11:23:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:10:31.429 Cannot find device "nvmf_init_br2" 00:10:31.429 11:23:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@163 -- # true 00:10:31.429 11:23:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:10:31.429 Cannot find device "nvmf_tgt_br" 00:10:31.429 11:23:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@164 -- # true 00:10:31.429 11:23:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:10:31.429 Cannot find device "nvmf_tgt_br2" 00:10:31.429 11:23:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@165 -- # true 00:10:31.429 11:23:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:10:31.429 Cannot find device "nvmf_init_br" 00:10:31.429 11:23:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@166 -- # true 00:10:31.429 11:23:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:10:31.429 Cannot find device "nvmf_init_br2" 00:10:31.429 11:23:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@167 -- # true 00:10:31.429 11:23:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:10:31.429 Cannot find device "nvmf_tgt_br" 00:10:31.429 11:23:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@168 -- # true 00:10:31.429 11:23:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:10:31.689 Cannot find device "nvmf_tgt_br2" 00:10:31.689 11:23:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@169 -- # true 00:10:31.689 11:23:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:10:31.689 Cannot find device "nvmf_br" 00:10:31.689 11:23:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@170 -- # true 00:10:31.689 11:23:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:10:31.689 Cannot find device "nvmf_init_if" 00:10:31.689 11:23:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@171 -- # true 00:10:31.689 11:23:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:10:31.689 Cannot find device "nvmf_init_if2" 00:10:31.689 11:23:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@172 -- # true 00:10:31.689 11:23:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:31.689 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:31.689 11:23:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@173 -- # true 00:10:31.689 11:23:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:31.689 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:31.689 11:23:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@174 -- # true 00:10:31.689 11:23:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:10:31.689 11:23:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:31.689 11:23:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:10:31.689 11:23:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:31.689 11:23:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:31.689 11:23:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:31.689 11:23:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:31.689 11:23:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:31.689 11:23:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:10:31.689 11:23:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:10:31.689 11:23:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:10:31.689 11:23:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:10:31.689 11:23:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:10:31.689 11:23:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:10:31.689 11:23:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:10:31.689 11:23:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:10:31.689 11:23:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:10:31.689 11:23:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 
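(Annotation, not part of the trace output.) Taken together, the nvmf_veth_init commands traced above amount to the following standalone sequence; this is a condensed sketch of what test/nvmf/common.sh executes here, using exactly the interface names and addresses printed in the trace. The bridge wiring comes next in the log.

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if  type veth peer name nvmf_init_br    # initiator path 1
  ip link add nvmf_init_if2 type veth peer name nvmf_init_br2   # initiator path 2
  ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br     # target path 1
  ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2    # target path 2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk               # target ends live in the namespace
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if                      # initiator addresses
  ip addr add 10.0.0.2/24 dev nvmf_init_if2
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if    # target addresses
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
  # all interfaces are then brought up; the *_br peer ends get enslaved to the
  # nvmf_br bridge and iptables ACCEPT rules for TCP port 4420 are added, as the
  # trace immediately below shows
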
00:10:31.689 11:23:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:31.689 11:23:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:31.689 11:23:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:10:31.689 11:23:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:10:31.689 11:23:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:10:31.689 11:23:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:10:31.689 11:23:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:31.689 11:23:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:31.689 11:23:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:31.689 11:23:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:10:31.689 11:23:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:10:31.689 11:23:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:10:31.689 11:23:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:31.689 11:23:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:10:31.689 11:23:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:10:31.949 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:31.949 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.092 ms 00:10:31.949 00:10:31.949 --- 10.0.0.3 ping statistics --- 00:10:31.949 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:31.949 rtt min/avg/max/mdev = 0.092/0.092/0.092/0.000 ms 00:10:31.949 11:23:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:10:31.949 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:10:31.949 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.047 ms 00:10:31.949 00:10:31.949 --- 10.0.0.4 ping statistics --- 00:10:31.949 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:31.949 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:10:31.949 11:23:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:31.949 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:31.949 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:10:31.949 00:10:31.949 --- 10.0.0.1 ping statistics --- 00:10:31.949 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:31.949 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:10:31.949 11:23:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:10:31.949 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:31.949 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.063 ms 00:10:31.949 00:10:31.949 --- 10.0.0.2 ping statistics --- 00:10:31.949 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:31.949 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:10:31.949 11:23:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:31.949 11:23:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@461 -- # return 0 00:10:31.949 11:23:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:31.949 11:23:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:31.949 11:23:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:31.949 11:23:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:31.949 11:23:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:31.949 11:23:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:31.949 11:23:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:31.949 11:23:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z 10.0.0.4 ']' 00:10:31.949 11:23:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@51 -- # '[' tcp '!=' tcp ']' 00:10:31.949 11:23:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@57 -- # nvmfappstart -m 0xF 00:10:31.949 11:23:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:31.949 11:23:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:31.949 11:23:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:10:31.949 11:23:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@509 -- # nvmfpid=67990 00:10:31.949 11:23:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:31.949 11:23:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@510 -- # waitforlisten 67990 00:10:31.949 11:23:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@835 -- # '[' -z 67990 ']' 00:10:31.949 11:23:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:31.949 11:23:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:31.949 11:23:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock...' 00:10:31.949 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:31.949 11:23:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:31.949 11:23:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:10:31.949 [2024-11-20 11:23:17.384676] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 00:10:31.949 [2024-11-20 11:23:17.385424] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:32.208 [2024-11-20 11:23:17.544128] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:32.208 [2024-11-20 11:23:17.587512] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:32.208 [2024-11-20 11:23:17.587605] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:32.208 [2024-11-20 11:23:17.587633] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:32.208 [2024-11-20 11:23:17.587645] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:32.208 [2024-11-20 11:23:17.587654] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:32.208 [2024-11-20 11:23:17.588740] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:32.208 [2024-11-20 11:23:17.588879] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:32.208 [2024-11-20 11:23:17.588919] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:32.208 [2024-11-20 11:23:17.588923] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:32.208 11:23:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:32.208 11:23:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@868 -- # return 0 00:10:32.208 11:23:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:32.208 11:23:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:32.208 11:23:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:10:32.208 11:23:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:32.208 11:23:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:32.774 [2024-11-20 11:23:18.078887] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:32.774 11:23:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:10:33.033 Malloc0 00:10:33.033 11:23:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r 00:10:33.292 11:23:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
target/multipath.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:33.551 11:23:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:10:33.809 [2024-11-20 11:23:19.286429] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:10:33.809 11:23:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 00:10:34.066 [2024-11-20 11:23:19.614796] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.4 port 4420 *** 00:10:34.066 11:23:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@67 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a --hostid=806d442c-08ba-4719-b76d-33169283459a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G 00:10:34.324 11:23:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@68 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a --hostid=806d442c-08ba-4719-b76d-33169283459a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.4 -s 4420 -g -G 00:10:34.583 11:23:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@69 -- # waitforserial SPDKISFASTANDAWESOME 00:10:34.583 11:23:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1202 -- # local i=0 00:10:34.583 11:23:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:10:34.583 11:23:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:10:34.583 11:23:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1209 -- # sleep 2 00:10:36.486 11:23:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:10:36.486 11:23:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:10:36.486 11:23:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:10:36.744 11:23:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:10:36.744 11:23:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:10:36.744 11:23:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1212 -- # return 0 00:10:36.744 11:23:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@72 -- # get_subsystem nqn.2016-06.io.spdk:cnode1 SPDKISFASTANDAWESOME 00:10:36.744 11:23:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@34 -- # local nqn=nqn.2016-06.io.spdk:cnode1 serial=SPDKISFASTANDAWESOME s 00:10:36.744 11:23:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@36 -- # for s in /sys/class/nvme-subsystem/* 00:10:36.744 11:23:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 
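(Annotation, not part of the trace output.) For reference, the target-side RPCs and initiator-side connects traced above reduce to the sequence sketched below; rpc.py stands for the /home/vagrant/spdk_repo/spdk/scripts/rpc.py path shown in the trace, and the NQN, serial, addresses, and port are the values it prints. Connecting once per listener address is what gives the initiator kernel two controller paths (nvme0c0n1, nvme0c1n1) under a single ANA-reporting subsystem, which the checks that follow inspect via /sys/block/*/ana_state.

  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py bdev_malloc_create 64 512 -b Malloc0                     # 64 MiB backing bdev, 512 B blocks
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420
  # initiator side: one connect per listener address, same hostnqn/hostid
  nvme connect --hostnqn=$NVME_HOSTNQN --hostid=$NVME_HOSTID -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G
  nvme connect --hostnqn=$NVME_HOSTNQN --hostid=$NVME_HOSTID -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.4 -s 4420 -g -G
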
00:10:36.745 11:23:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ SPDKISFASTANDAWESOME == \S\P\D\K\I\S\F\A\S\T\A\N\D\A\W\E\S\O\M\E ]] 00:10:36.745 11:23:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@38 -- # echo nvme-subsys0 00:10:36.745 11:23:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@38 -- # return 0 00:10:36.745 11:23:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@72 -- # subsystem=nvme-subsys0 00:10:36.745 11:23:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@73 -- # paths=(/sys/class/nvme-subsystem/$subsystem/nvme*/nvme*c*) 00:10:36.745 11:23:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@74 -- # paths=("${paths[@]##*/}") 00:10:36.745 11:23:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@76 -- # (( 2 == 2 )) 00:10:36.745 11:23:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@78 -- # p0=nvme0c0n1 00:10:36.745 11:23:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@79 -- # p1=nvme0c1n1 00:10:36.745 11:23:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@81 -- # check_ana_state nvme0c0n1 optimized 00:10:36.745 11:23:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:10:36.745 11:23:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:36.745 11:23:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:10:36.745 11:23:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:10:36.745 11:23:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:10:36.745 11:23:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@82 -- # check_ana_state nvme0c1n1 optimized 00:10:36.745 11:23:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:10:36.745 11:23:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:36.745 11:23:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:10:36.745 11:23:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:10:36.745 11:23:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:10:36.745 11:23:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@85 -- # echo numa 00:10:36.745 11:23:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@88 -- # fio_pid=68114 00:10:36.745 11:23:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:10:36.745 11:23:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@90 -- # sleep 1 00:10:36.745 [global] 00:10:36.745 thread=1 00:10:36.745 invalidate=1 00:10:36.745 rw=randrw 00:10:36.745 time_based=1 00:10:36.745 runtime=6 00:10:36.745 ioengine=libaio 00:10:36.745 direct=1 00:10:36.745 bs=4096 00:10:36.745 iodepth=128 00:10:36.745 norandommap=0 00:10:36.745 numjobs=1 00:10:36.745 00:10:36.745 verify_dump=1 00:10:36.745 verify_backlog=512 00:10:36.745 verify_state_save=0 00:10:36.745 do_verify=1 00:10:36.745 verify=crc32c-intel 00:10:36.745 [job0] 00:10:36.745 filename=/dev/nvme0n1 00:10:36.745 Could not set queue depth (nvme0n1) 00:10:36.745 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:36.745 fio-3.35 00:10:36.745 Starting 1 thread 00:10:37.680 11:23:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:10:37.937 11:23:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n non_optimized 00:10:38.195 11:23:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@95 -- # check_ana_state nvme0c0n1 inaccessible 00:10:38.195 11:23:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:10:38.195 11:23:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:38.195 11:23:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:10:38.195 11:23:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:10:38.195 11:23:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:10:38.195 11:23:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@96 -- # check_ana_state nvme0c1n1 non-optimized 00:10:38.195 11:23:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:10:38.195 11:23:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:38.195 11:23:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:10:38.195 11:23:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:10:38.195 11:23:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:10:38.195 11:23:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:10:39.636 11:23:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:10:39.636 11:23:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:10:39.636 11:23:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:10:39.636 11:23:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:10:39.636 11:23:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n inaccessible 00:10:39.893 11:23:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@101 -- # check_ana_state nvme0c0n1 non-optimized 00:10:39.893 11:23:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:10:39.893 11:23:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:39.893 11:23:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:10:39.893 11:23:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:10:39.893 11:23:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:10:39.893 11:23:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@102 -- # check_ana_state nvme0c1n1 inaccessible 00:10:39.893 11:23:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:10:39.894 11:23:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:39.894 11:23:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:10:39.894 11:23:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:10:39.894 11:23:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:10:39.894 11:23:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:10:40.829 11:23:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:10:40.829 11:23:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:10:40.829 11:23:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:10:40.829 11:23:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@104 -- # wait 68114 00:10:43.356 00:10:43.356 job0: (groupid=0, jobs=1): err= 0: pid=68139: Wed Nov 20 11:23:28 2024 00:10:43.356 read: IOPS=10.5k, BW=41.2MiB/s (43.2MB/s)(247MiB/5999msec) 00:10:43.356 slat (usec): min=2, max=6381, avg=54.18, stdev=247.77 00:10:43.356 clat (usec): min=772, max=19495, avg=8238.28, stdev=1324.70 00:10:43.356 lat (usec): min=864, max=19502, avg=8292.46, stdev=1335.62 00:10:43.356 clat percentiles (usec): 00:10:43.356 | 1.00th=[ 4883], 5.00th=[ 6325], 10.00th=[ 6980], 20.00th=[ 7439], 00:10:43.356 | 30.00th=[ 7635], 40.00th=[ 7832], 50.00th=[ 8094], 60.00th=[ 8356], 00:10:43.356 | 70.00th=[ 8717], 80.00th=[ 9110], 90.00th=[ 9765], 95.00th=[10552], 00:10:43.356 | 99.00th=[12256], 99.50th=[12780], 99.90th=[14222], 99.95th=[15008], 00:10:43.356 | 99.99th=[18482] 00:10:43.356 bw ( KiB/s): min= 9616, max=27520, per=53.12%, avg=22392.73, stdev=6014.74, samples=11 00:10:43.356 iops : min= 2404, max= 6880, avg=5598.18, stdev=1503.69, samples=11 00:10:43.356 write: IOPS=6386, BW=24.9MiB/s (26.2MB/s)(133MiB/5330msec); 0 zone resets 00:10:43.356 slat (usec): min=12, max=2748, avg=65.09, stdev=165.68 00:10:43.356 clat (usec): min=731, max=17156, avg=7056.46, stdev=1135.69 00:10:43.356 lat (usec): min=777, max=17199, avg=7121.55, stdev=1141.16 00:10:43.356 clat percentiles (usec): 00:10:43.357 | 1.00th=[ 3720], 5.00th=[ 5080], 10.00th=[ 5997], 20.00th=[ 6456], 00:10:43.357 | 30.00th=[ 6718], 40.00th=[ 6915], 50.00th=[ 7111], 60.00th=[ 7242], 00:10:43.357 | 70.00th=[ 7439], 80.00th=[ 7701], 90.00th=[ 8094], 95.00th=[ 8586], 00:10:43.357 | 99.00th=[10552], 99.50th=[11600], 99.90th=[13173], 99.95th=[14353], 00:10:43.357 | 99.99th=[16188] 00:10:43.357 bw ( KiB/s): min=10168, max=27760, per=87.70%, avg=22402.18, stdev=5717.64, samples=11 00:10:43.357 iops : min= 2542, max= 6940, avg=5600.55, stdev=1429.41, samples=11 00:10:43.357 lat (usec) : 750=0.01%, 1000=0.01% 00:10:43.357 lat (msec) : 2=0.08%, 4=0.58%, 10=93.66%, 20=5.67% 00:10:43.357 cpu : usr=5.60%, sys=22.44%, ctx=6282, majf=0, minf=102 00:10:43.357 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:10:43.357 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:43.357 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:43.357 issued rwts: total=63216,34039,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:43.357 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:43.357 00:10:43.357 Run status group 0 (all jobs): 00:10:43.357 READ: bw=41.2MiB/s (43.2MB/s), 41.2MiB/s-41.2MiB/s (43.2MB/s-43.2MB/s), io=247MiB (259MB), run=5999-5999msec 00:10:43.357 WRITE: bw=24.9MiB/s (26.2MB/s), 24.9MiB/s-24.9MiB/s (26.2MB/s-26.2MB/s), io=133MiB (139MB), run=5330-5330msec 00:10:43.357 00:10:43.357 Disk stats (read/write): 00:10:43.357 nvme0n1: ios=62424/33291, merge=0/0, ticks=482245/219912, in_queue=702157, util=98.55% 00:10:43.357 11:23:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:10:43.357 11:23:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@107 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n optimized 00:10:43.615 11:23:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@109 -- # check_ana_state nvme0c0n1 optimized 00:10:43.615 11:23:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:10:43.615 11:23:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:43.615 11:23:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:10:43.615 11:23:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:10:43.615 11:23:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:10:43.615 11:23:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@110 -- # check_ana_state nvme0c1n1 optimized 00:10:43.615 11:23:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:10:43.615 11:23:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:43.615 11:23:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:10:43.615 11:23:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:10:43.615 11:23:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \o\p\t\i\m\i\z\e\d ]] 00:10:43.615 11:23:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:10:44.591 11:23:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:10:44.591 11:23:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:10:44.591 11:23:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:10:44.591 11:23:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@113 -- # echo round-robin 00:10:44.591 11:23:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@116 -- # fio_pid=68275 00:10:44.591 11:23:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:10:44.591 11:23:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@118 -- # sleep 1 00:10:44.591 [global] 00:10:44.591 thread=1 00:10:44.591 invalidate=1 00:10:44.591 rw=randrw 00:10:44.591 time_based=1 00:10:44.591 runtime=6 00:10:44.591 ioengine=libaio 00:10:44.591 direct=1 00:10:44.591 bs=4096 00:10:44.591 iodepth=128 00:10:44.591 norandommap=0 00:10:44.591 numjobs=1 00:10:44.591 00:10:44.849 verify_dump=1 00:10:44.849 verify_backlog=512 00:10:44.849 verify_state_save=0 00:10:44.849 do_verify=1 00:10:44.849 verify=crc32c-intel 00:10:44.849 [job0] 00:10:44.849 filename=/dev/nvme0n1 00:10:44.849 Could not set queue depth (nvme0n1) 00:10:44.849 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:44.849 fio-3.35 00:10:44.849 Starting 1 thread 00:10:45.787 11:23:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:10:46.045 11:23:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n non_optimized 00:10:46.303 11:23:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@123 -- # check_ana_state nvme0c0n1 inaccessible 00:10:46.303 11:23:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:10:46.303 11:23:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:46.303 11:23:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:10:46.303 11:23:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:10:46.303 11:23:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:10:46.303 11:23:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@124 -- # check_ana_state nvme0c1n1 non-optimized 00:10:46.303 11:23:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:10:46.303 11:23:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:46.303 11:23:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:10:46.303 11:23:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:10:46.303 11:23:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:10:46.303 11:23:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:10:47.236 11:23:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:10:47.236 11:23:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:10:47.236 11:23:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:10:47.236 11:23:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:10:47.801 11:23:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n inaccessible 00:10:48.060 11:23:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@129 -- # check_ana_state nvme0c0n1 non-optimized 00:10:48.060 11:23:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:10:48.060 11:23:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:48.060 11:23:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:10:48.060 11:23:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:10:48.060 11:23:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:10:48.060 11:23:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@130 -- # check_ana_state nvme0c1n1 inaccessible 00:10:48.060 11:23:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:10:48.060 11:23:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:48.060 11:23:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:10:48.060 11:23:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:10:48.060 11:23:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:10:48.060 11:23:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:10:48.995 11:23:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:10:48.995 11:23:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:10:48.995 11:23:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:10:48.995 11:23:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@132 -- # wait 68275 00:10:50.898 00:10:50.898 job0: (groupid=0, jobs=1): err= 0: pid=68296: Wed Nov 20 11:23:36 2024 00:10:50.898 read: IOPS=10.4k, BW=40.7MiB/s (42.6MB/s)(245MiB/6020msec) 00:10:50.898 slat (usec): min=2, max=6744, avg=48.33, stdev=241.41 00:10:50.898 clat (usec): min=236, max=52659, avg=8396.10, stdev=3249.44 00:10:50.898 lat (usec): min=264, max=52680, avg=8444.43, stdev=3268.12 00:10:50.898 clat percentiles (usec): 00:10:50.898 | 1.00th=[ 1188], 5.00th=[ 2933], 10.00th=[ 4621], 20.00th=[ 6259], 00:10:50.898 | 30.00th=[ 7439], 40.00th=[ 7898], 50.00th=[ 8455], 60.00th=[ 9110], 00:10:50.898 | 70.00th=[ 9634], 80.00th=[10028], 90.00th=[11207], 95.00th=[12518], 00:10:50.898 | 99.00th=[19268], 99.50th=[20579], 99.90th=[32637], 99.95th=[39060], 00:10:50.898 | 99.99th=[47449] 00:10:50.898 bw ( KiB/s): min=15728, max=32032, per=57.96%, avg=24130.18, stdev=4944.83, samples=11 00:10:50.898 iops : min= 3932, max= 8008, avg=6032.55, stdev=1236.21, samples=11 00:10:50.898 write: IOPS=6310, BW=24.6MiB/s (25.8MB/s)(130MiB/5271msec); 0 zone resets 00:10:50.898 slat (usec): min=4, max=4979, avg=61.14, stdev=165.53 00:10:50.898 clat (usec): min=150, max=44336, avg=7157.48, stdev=3438.94 00:10:50.898 lat (usec): min=196, max=47297, avg=7218.62, stdev=3458.15 00:10:50.898 clat percentiles (usec): 00:10:50.898 | 1.00th=[ 701], 5.00th=[ 1844], 10.00th=[ 3163], 20.00th=[ 4555], 00:10:50.898 | 30.00th=[ 5997], 40.00th=[ 6783], 50.00th=[ 7242], 60.00th=[ 7635], 00:10:50.898 | 70.00th=[ 8160], 80.00th=[ 8979], 90.00th=[ 9896], 95.00th=[12518], 00:10:50.898 | 99.00th=[17433], 99.50th=[20055], 99.90th=[34866], 99.95th=[39584], 00:10:50.898 | 99.99th=[44303] 00:10:50.898 bw ( KiB/s): min=16360, max=32792, per=95.68%, avg=24150.55, stdev=4845.02, samples=11 00:10:50.898 iops : min= 4090, max= 8198, avg=6037.64, stdev=1211.25, samples=11 00:10:50.898 lat (usec) : 250=0.02%, 500=0.21%, 750=0.40%, 1000=0.52% 00:10:50.898 lat (msec) : 2=2.50%, 4=6.90%, 10=72.70%, 20=16.13%, 50=0.62% 00:10:50.898 lat (msec) : 100=0.01% 00:10:50.898 cpu : usr=5.36%, sys=25.01%, ctx=7498, majf=0, minf=127 00:10:50.898 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:10:50.898 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:50.898 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:50.898 issued rwts: total=62661,33262,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:50.898 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:50.898 00:10:50.898 Run status group 0 (all jobs): 00:10:50.898 READ: bw=40.7MiB/s (42.6MB/s), 40.7MiB/s-40.7MiB/s (42.6MB/s-42.6MB/s), io=245MiB (257MB), run=6020-6020msec 00:10:50.898 WRITE: bw=24.6MiB/s (25.8MB/s), 24.6MiB/s-24.6MiB/s (25.8MB/s-25.8MB/s), io=130MiB (136MB), run=5271-5271msec 00:10:50.898 00:10:50.898 Disk stats (read/write): 00:10:50.898 nvme0n1: ios=62456/32825, merge=0/0, ticks=485034/211077, in_queue=696111, util=98.58% 00:10:50.898 11:23:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@134 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:51.155 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:10:51.155 11:23:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
target/multipath.sh@135 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:51.155 11:23:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1223 -- # local i=0 00:10:51.155 11:23:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:10:51.155 11:23:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:51.155 11:23:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:10:51.155 11:23:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:51.155 11:23:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1235 -- # return 0 00:10:51.155 11:23:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:51.413 11:23:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@139 -- # rm -f ./local-job0-0-verify.state 00:10:51.413 11:23:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@140 -- # rm -f ./local-job1-1-verify.state 00:10:51.413 11:23:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@142 -- # trap - SIGINT SIGTERM EXIT 00:10:51.413 11:23:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@144 -- # nvmftestfini 00:10:51.413 11:23:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:51.413 11:23:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:10:51.413 11:23:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:51.413 11:23:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:10:51.413 11:23:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:51.413 11:23:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:51.413 rmmod nvme_tcp 00:10:51.413 rmmod nvme_fabrics 00:10:51.413 rmmod nvme_keyring 00:10:51.413 11:23:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:51.672 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:10:51.672 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:10:51.672 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n 67990 ']' 00:10:51.672 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@518 -- # killprocess 67990 00:10:51.672 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@954 -- # '[' -z 67990 ']' 00:10:51.672 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@958 -- # kill -0 67990 00:10:51.672 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@959 -- # uname 00:10:51.672 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:51.672 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67990 00:10:51.672 killing process with 
pid 67990 00:10:51.672 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:51.672 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:51.672 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67990' 00:10:51.672 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@973 -- # kill 67990 00:10:51.672 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@978 -- # wait 67990 00:10:51.672 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:51.672 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:51.672 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:51.672 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:10:51.672 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:10:51.672 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:10:51.672 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:51.672 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:51.672 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:10:51.672 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:10:51.672 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:10:51.672 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:10:51.672 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:10:51.930 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:10:51.930 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:10:51.930 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:10:51.930 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:10:51.930 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:10:51.930 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:10:51.930 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:10:51.930 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:51.930 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:51.930 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@246 -- # remove_spdk_ns 
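The nvmftestfini trace above walks the whole teardown: unload the initiator kernel modules, kill the target process, strip only the SPDK-tagged iptables rules, and dismantle the veth/bridge topology. A condensed sketch of that sequence, with the interface and namespace names taken from the trace (the wrapper function name and the final netns delete are our assumptions, since _remove_spdk_ns itself is not expanded in this log):

    nvmf_teardown_sketch() {
      modprobe -r nvme-tcp nvme-fabrics nvme-keyring || true   # initiator modules loaded for the test
      iptables-save | grep -v SPDK_NVMF | iptables-restore     # drop only the SPDK_NVMF-tagged rules
      for ifc in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$ifc" nomaster                            # detach the bridge ports
        ip link set "$ifc" down
      done
      ip link delete nvmf_br type bridge
      ip link delete nvmf_init_if
      ip link delete nvmf_init_if2
      ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
      ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
      ip netns delete nvmf_tgt_ns_spdk                         # assumed final step of remove_spdk_ns
    }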
00:10:51.931 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:51.931 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:51.931 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:51.931 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@300 -- # return 0 00:10:51.931 00:10:51.931 real 0m20.742s 00:10:51.931 user 1m20.833s 00:10:51.931 sys 0m6.680s 00:10:51.931 ************************************ 00:10:51.931 END TEST nvmf_target_multipath 00:10:51.931 ************************************ 00:10:51.931 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:51.931 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:10:51.931 11:23:37 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:10:51.931 11:23:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:51.931 11:23:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:51.931 11:23:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:51.931 ************************************ 00:10:51.931 START TEST nvmf_zcopy 00:10:51.931 ************************************ 00:10:51.931 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:10:52.189 * Looking for test storage... 
00:10:52.189 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:52.189 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:52.189 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:52.189 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lcov --version 00:10:52.189 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:52.189 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:52.189 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:52.189 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:52.189 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:10:52.189 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:10:52.189 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:10:52.189 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:10:52.189 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:10:52.189 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:10:52.189 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:10:52.189 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:52.189 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:10:52.189 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:10:52.189 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:52.189 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:52.189 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:10:52.189 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:10:52.189 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:52.189 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:10:52.189 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:10:52.189 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:10:52.189 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:10:52.189 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:52.189 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:10:52.189 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:10:52.189 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:52.189 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:52.189 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:10:52.189 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:52.189 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:52.189 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:52.189 --rc genhtml_branch_coverage=1 00:10:52.189 --rc genhtml_function_coverage=1 00:10:52.189 --rc genhtml_legend=1 00:10:52.189 --rc geninfo_all_blocks=1 00:10:52.189 --rc geninfo_unexecuted_blocks=1 00:10:52.189 00:10:52.189 ' 00:10:52.189 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:52.189 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:52.189 --rc genhtml_branch_coverage=1 00:10:52.189 --rc genhtml_function_coverage=1 00:10:52.189 --rc genhtml_legend=1 00:10:52.189 --rc geninfo_all_blocks=1 00:10:52.189 --rc geninfo_unexecuted_blocks=1 00:10:52.189 00:10:52.189 ' 00:10:52.189 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:52.189 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:52.189 --rc genhtml_branch_coverage=1 00:10:52.189 --rc genhtml_function_coverage=1 00:10:52.189 --rc genhtml_legend=1 00:10:52.189 --rc geninfo_all_blocks=1 00:10:52.189 --rc geninfo_unexecuted_blocks=1 00:10:52.189 00:10:52.189 ' 00:10:52.189 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:52.189 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:52.189 --rc genhtml_branch_coverage=1 00:10:52.189 --rc genhtml_function_coverage=1 00:10:52.189 --rc genhtml_legend=1 00:10:52.189 --rc geninfo_all_blocks=1 00:10:52.189 --rc geninfo_unexecuted_blocks=1 00:10:52.189 00:10:52.189 ' 00:10:52.189 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:52.189 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:10:52.189 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
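The lt/cmp_versions trace a few entries above is what decides whether the installed lcov is older than 2.x and therefore needs the --rc lcov_branch_coverage/--rc lcov_function_coverage spellings exported into LCOV_OPTS. The comparison is a component-wise numeric walk over the two version strings; a minimal standalone sketch of the idea (names are ours, not the exact scripts/common.sh implementation, and it assumes purely numeric components):

    version_cmp_sketch() {                # echoes lt, gt or eq for "$1" vs "$2"
      local IFS='.-:'                     # split on the same separators the trace uses
      read -ra a <<< "$1"
      read -ra b <<< "$2"
      local n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
      for (( i = 0; i < n; i++ )); do
        local x=${a[i]:-0} y=${b[i]:-0}   # missing components count as 0
        (( x > y )) && { echo gt; return; }
        (( x < y )) && { echo lt; return; }
      done
      echo eq
    }
    # version_cmp_sketch 1.15 2  ->  lt, so the pre-2.x lcov flags are exported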
00:10:52.189 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:52.189 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:52.189 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:52.189 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:52.189 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:52.189 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:52.189 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:52.189 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:52.189 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:52.189 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a 00:10:52.189 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=806d442c-08ba-4719-b76d-33169283459a 00:10:52.189 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:52.189 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:52.189 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:52.189 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:52.190 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:52.190 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:10:52.190 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:52.190 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:52.190 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:52.190 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:52.190 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:52.190 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:52.190 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:10:52.190 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:52.190 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:10:52.190 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:52.190 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:52.190 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:52.190 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:52.190 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:52.190 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:52.190 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:52.190 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:52.190 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:52.190 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:52.190 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:10:52.190 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:52.190 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 
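nvmftestinit has just installed its cleanup trap; with NET_TYPE=virt it falls through to nvmf_veth_init, and the topology the following entries build is easiest to read as a handful of ip commands: two initiator-side veth pairs on the host (10.0.0.1 and 10.0.0.2) and two target-side pairs moved into the nvmf_tgt_ns_spdk namespace (10.0.0.3 and 10.0.0.4), all joined by the nvmf_br bridge. A sketch using the names and addresses from the trace below (the trace additionally brings every endpoint and the namespace loopback up and punches the port-4420 iptables ACCEPT rules):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if  type veth peer name nvmf_init_br
    ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
    ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if                                  # first initiator path
    ip addr add 10.0.0.2/24 dev nvmf_init_if2                                 # second initiator path
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if    # first target listener
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2   # second target listener
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    for ifc in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
      ip link set "$ifc" up
      ip link set "$ifc" master nvmf_br                                       # bridge the four peer ends
    done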
00:10:52.190 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:52.190 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:52.190 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:52.190 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:52.190 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:52.190 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:52.190 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:10:52.190 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:10:52.190 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:10:52.190 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:10:52.190 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:10:52.190 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@460 -- # nvmf_veth_init 00:10:52.190 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:52.190 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:10:52.190 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:10:52.190 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:10:52.190 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:52.190 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:10:52.190 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:52.190 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:10:52.190 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:52.190 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:10:52.190 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:52.190 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:52.190 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:52.190 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:52.190 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:52.190 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:52.190 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:10:52.190 Cannot find device "nvmf_init_br" 00:10:52.190 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@162 -- # true 00:10:52.190 11:23:37 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:10:52.190 Cannot find device "nvmf_init_br2" 00:10:52.190 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@163 -- # true 00:10:52.190 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:10:52.190 Cannot find device "nvmf_tgt_br" 00:10:52.190 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@164 -- # true 00:10:52.190 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:10:52.190 Cannot find device "nvmf_tgt_br2" 00:10:52.190 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@165 -- # true 00:10:52.190 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:10:52.190 Cannot find device "nvmf_init_br" 00:10:52.190 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@166 -- # true 00:10:52.190 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:10:52.190 Cannot find device "nvmf_init_br2" 00:10:52.190 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@167 -- # true 00:10:52.190 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:10:52.190 Cannot find device "nvmf_tgt_br" 00:10:52.190 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@168 -- # true 00:10:52.190 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:10:52.190 Cannot find device "nvmf_tgt_br2" 00:10:52.190 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@169 -- # true 00:10:52.190 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:10:52.190 Cannot find device "nvmf_br" 00:10:52.190 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@170 -- # true 00:10:52.190 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:10:52.449 Cannot find device "nvmf_init_if" 00:10:52.449 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@171 -- # true 00:10:52.449 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:10:52.449 Cannot find device "nvmf_init_if2" 00:10:52.449 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@172 -- # true 00:10:52.449 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:52.449 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:52.449 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@173 -- # true 00:10:52.449 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:52.449 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:52.449 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@174 -- # true 00:10:52.449 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:10:52.449 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:52.449 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type 
veth peer name nvmf_init_br2 00:10:52.449 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:52.449 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:52.449 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:52.449 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:52.449 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:52.449 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:10:52.449 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:10:52.449 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:10:52.449 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:10:52.449 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:10:52.449 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:10:52.449 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:10:52.449 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:10:52.449 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:10:52.449 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:52.449 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:52.449 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:52.449 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:10:52.449 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:10:52.449 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:10:52.449 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:10:52.449 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:52.449 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:52.449 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:52.449 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:10:52.449 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:10:52.449 11:23:37 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:10:52.449 11:23:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:52.449 11:23:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:10:52.449 11:23:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:10:52.449 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:52.449 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.087 ms 00:10:52.449 00:10:52.449 --- 10.0.0.3 ping statistics --- 00:10:52.449 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:52.449 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:10:52.449 11:23:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:10:52.449 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:10:52.449 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.057 ms 00:10:52.449 00:10:52.449 --- 10.0.0.4 ping statistics --- 00:10:52.449 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:52.449 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:10:52.449 11:23:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:52.449 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:52.449 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:10:52.449 00:10:52.449 --- 10.0.0.1 ping statistics --- 00:10:52.449 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:52.449 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:10:52.449 11:23:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:10:52.449 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:52.449 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.067 ms 00:10:52.449 00:10:52.449 --- 10.0.0.2 ping statistics --- 00:10:52.449 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:52.449 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:10:52.449 11:23:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:52.449 11:23:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@461 -- # return 0 00:10:52.449 11:23:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:52.449 11:23:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:52.449 11:23:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:52.449 11:23:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:52.449 11:23:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:52.449 11:23:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:52.449 11:23:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:52.708 11:23:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:10:52.708 11:23:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:52.708 11:23:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:52.708 11:23:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:52.708 11:23:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=68631 00:10:52.708 11:23:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 68631 00:10:52.708 11:23:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:10:52.708 11:23:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 68631 ']' 00:10:52.708 11:23:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:52.708 11:23:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:52.708 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:52.708 11:23:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:52.708 11:23:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:52.708 11:23:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:52.708 [2024-11-20 11:23:38.135483] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 
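With the topology verified by the four pings, nvmfappstart launches the target inside the namespace and waitforlisten blocks until its RPC socket answers; the startup banner that follows is the target coming up. A hedged sketch of what that amounts to (the nvmf_tgt command line is the one from the trace; the polling loop is a simplification of waitforlisten, not its exact body):

    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
    nvmfpid=$!
    # poll the default RPC socket until the target responds (or dies)
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$nvmfpid" || { echo "nvmf_tgt exited before listening" >&2; exit 1; }
        sleep 0.5
    done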
00:10:52.708 [2024-11-20 11:23:38.135605] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:52.708 [2024-11-20 11:23:38.284191] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:52.966 [2024-11-20 11:23:38.316256] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:52.966 [2024-11-20 11:23:38.316311] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:52.966 [2024-11-20 11:23:38.316323] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:52.966 [2024-11-20 11:23:38.316332] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:52.966 [2024-11-20 11:23:38.316339] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:52.966 [2024-11-20 11:23:38.316681] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:53.900 11:23:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:53.900 11:23:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:10:53.900 11:23:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:53.900 11:23:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:53.900 11:23:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:53.900 11:23:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:53.900 11:23:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:10:53.900 11:23:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:10:53.900 11:23:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.900 11:23:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:53.900 [2024-11-20 11:23:39.156759] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:53.900 11:23:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.900 11:23:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:53.900 11:23:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.900 11:23:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:53.900 11:23:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.900 11:23:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:10:53.900 11:23:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.900 11:23:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:53.900 [2024-11-20 11:23:39.172975] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.3 port 4420 *** 00:10:53.900 11:23:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.900 11:23:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:10:53.900 11:23:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.900 11:23:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:53.900 11:23:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.900 11:23:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:10:53.900 11:23:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.900 11:23:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:53.900 malloc0 00:10:53.900 11:23:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.900 11:23:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:10:53.900 11:23:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.900 11:23:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:53.900 11:23:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.900 11:23:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:10:53.900 11:23:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:10:53.900 11:23:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:10:53.900 11:23:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:10:53.900 11:23:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:53.900 11:23:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:53.900 { 00:10:53.900 "params": { 00:10:53.900 "name": "Nvme$subsystem", 00:10:53.900 "trtype": "$TEST_TRANSPORT", 00:10:53.900 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:53.900 "adrfam": "ipv4", 00:10:53.900 "trsvcid": "$NVMF_PORT", 00:10:53.900 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:53.900 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:53.900 "hdgst": ${hdgst:-false}, 00:10:53.900 "ddgst": ${ddgst:-false} 00:10:53.900 }, 00:10:53.900 "method": "bdev_nvme_attach_controller" 00:10:53.900 } 00:10:53.900 EOF 00:10:53.900 )") 00:10:53.900 11:23:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:10:53.900 11:23:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
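The subsystem wiring for this zcopy run happened a few entries above via rpc_cmd; written out as plain rpc.py invocations (rpc_cmd is essentially a wrapper over the same RPC socket, and every flag here is copied from the trace), it is:

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $RPC nvmf_create_transport -t tcp -o -c 0 --zcopy                             # TCP transport with zero-copy requested
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    $RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420
    $RPC bdev_malloc_create 32 4096 -b malloc0                                    # 32 MiB, 4 KiB-block ramdisk
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1

The gen_nvmf_target_json output printed next is the matching initiator-side config that bdevperf consumes to attach over 10.0.0.3:4420.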
00:10:53.900 11:23:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:10:53.900 11:23:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:53.900 "params": { 00:10:53.900 "name": "Nvme1", 00:10:53.900 "trtype": "tcp", 00:10:53.900 "traddr": "10.0.0.3", 00:10:53.900 "adrfam": "ipv4", 00:10:53.900 "trsvcid": "4420", 00:10:53.900 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:53.900 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:53.900 "hdgst": false, 00:10:53.900 "ddgst": false 00:10:53.900 }, 00:10:53.900 "method": "bdev_nvme_attach_controller" 00:10:53.900 }' 00:10:53.900 [2024-11-20 11:23:39.254462] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 00:10:53.900 [2024-11-20 11:23:39.255197] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68682 ] 00:10:53.900 [2024-11-20 11:23:39.404236] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:53.900 [2024-11-20 11:23:39.438656] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:54.158 Running I/O for 10 seconds... 00:10:56.026 5475.00 IOPS, 42.77 MiB/s [2024-11-20T11:23:42.990Z] 5509.50 IOPS, 43.04 MiB/s [2024-11-20T11:23:43.926Z] 5611.67 IOPS, 43.84 MiB/s [2024-11-20T11:23:44.861Z] 5677.75 IOPS, 44.36 MiB/s [2024-11-20T11:23:45.794Z] 5716.60 IOPS, 44.66 MiB/s [2024-11-20T11:23:46.728Z] 5669.50 IOPS, 44.29 MiB/s [2024-11-20T11:23:47.660Z] 5665.43 IOPS, 44.26 MiB/s [2024-11-20T11:23:48.593Z] 5648.75 IOPS, 44.13 MiB/s [2024-11-20T11:23:50.023Z] 5659.11 IOPS, 44.21 MiB/s [2024-11-20T11:23:50.023Z] 5666.50 IOPS, 44.27 MiB/s 00:11:04.434 Latency(us) 00:11:04.434 [2024-11-20T11:23:50.023Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:04.434 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:11:04.434 Verification LBA range: start 0x0 length 0x1000 00:11:04.434 Nvme1n1 : 10.02 5668.90 44.29 0.00 0.00 22507.84 2576.76 33602.09 00:11:04.434 [2024-11-20T11:23:50.023Z] =================================================================================================================== 00:11:04.434 [2024-11-20T11:23:50.023Z] Total : 5668.90 44.29 0.00 0.00 22507.84 2576.76 33602.09 00:11:04.434 11:23:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=68799 00:11:04.434 11:23:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:11:04.434 11:23:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:04.434 11:23:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:11:04.434 11:23:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:11:04.434 11:23:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:11:04.434 11:23:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:11:04.434 11:23:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:11:04.434 11:23:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:11:04.434 { 00:11:04.434 "params": { 00:11:04.434 "name": "Nvme$subsystem", 
00:11:04.434 "trtype": "$TEST_TRANSPORT", 00:11:04.434 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:04.434 "adrfam": "ipv4", 00:11:04.434 "trsvcid": "$NVMF_PORT", 00:11:04.434 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:04.434 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:04.434 "hdgst": ${hdgst:-false}, 00:11:04.434 "ddgst": ${ddgst:-false} 00:11:04.434 }, 00:11:04.434 "method": "bdev_nvme_attach_controller" 00:11:04.434 } 00:11:04.434 EOF 00:11:04.434 )") 00:11:04.434 11:23:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:11:04.434 [2024-11-20 11:23:49.745051] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.434 [2024-11-20 11:23:49.745105] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.434 11:23:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:11:04.434 11:23:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:11:04.434 2024/11/20 11:23:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:04.434 11:23:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:11:04.434 "params": { 00:11:04.434 "name": "Nvme1", 00:11:04.434 "trtype": "tcp", 00:11:04.434 "traddr": "10.0.0.3", 00:11:04.434 "adrfam": "ipv4", 00:11:04.434 "trsvcid": "4420", 00:11:04.434 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:04.434 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:04.434 "hdgst": false, 00:11:04.434 "ddgst": false 00:11:04.434 }, 00:11:04.434 "method": "bdev_nvme_attach_controller" 00:11:04.434 }' 00:11:04.434 [2024-11-20 11:23:49.753079] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.434 [2024-11-20 11:23:49.753141] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.434 2024/11/20 11:23:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:04.434 [2024-11-20 11:23:49.765090] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.434 [2024-11-20 11:23:49.765157] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.435 2024/11/20 11:23:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:04.435 [2024-11-20 11:23:49.777102] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.435 [2024-11-20 11:23:49.777171] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.435 2024/11/20 11:23:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:04.435 [2024-11-20 11:23:49.789093] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:11:04.435 [2024-11-20 11:23:49.789164] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.435 2024/11/20 11:23:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:04.435 [2024-11-20 11:23:49.801091] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.435 [2024-11-20 11:23:49.801157] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.435 2024/11/20 11:23:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:04.435 [2024-11-20 11:23:49.810040] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 00:11:04.435 [2024-11-20 11:23:49.810160] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68799 ] 00:11:04.435 [2024-11-20 11:23:49.813069] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.435 [2024-11-20 11:23:49.813112] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.435 2024/11/20 11:23:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:04.435 [2024-11-20 11:23:49.825069] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.435 [2024-11-20 11:23:49.825125] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.435 2024/11/20 11:23:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:04.435 [2024-11-20 11:23:49.837067] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.435 [2024-11-20 11:23:49.837117] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.435 2024/11/20 11:23:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:04.435 [2024-11-20 11:23:49.849109] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.435 [2024-11-20 11:23:49.849157] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.435 2024/11/20 11:23:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: 
Code=-32602 Msg=Invalid parameters 00:11:04.435 [2024-11-20 11:23:49.861100] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.435 [2024-11-20 11:23:49.861154] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.435 2024/11/20 11:23:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:04.435 [2024-11-20 11:23:49.873107] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.435 [2024-11-20 11:23:49.873166] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.435 2024/11/20 11:23:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:04.435 [2024-11-20 11:23:49.885097] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.435 [2024-11-20 11:23:49.885148] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.435 2024/11/20 11:23:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:04.435 [2024-11-20 11:23:49.897081] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.435 [2024-11-20 11:23:49.897123] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.435 2024/11/20 11:23:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:04.435 [2024-11-20 11:23:49.909100] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.435 [2024-11-20 11:23:49.909152] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.435 2024/11/20 11:23:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:04.435 [2024-11-20 11:23:49.921069] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.435 [2024-11-20 11:23:49.921105] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.435 2024/11/20 11:23:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:04.435 [2024-11-20 11:23:49.933083] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.435 [2024-11-20 11:23:49.933121] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.435 2024/11/20 
11:23:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:04.435 [2024-11-20 11:23:49.941073] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.435 [2024-11-20 11:23:49.941108] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.435 2024/11/20 11:23:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:04.435 [2024-11-20 11:23:49.949070] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.435 [2024-11-20 11:23:49.949104] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.435 2024/11/20 11:23:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:04.435 [2024-11-20 11:23:49.961128] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.435 [2024-11-20 11:23:49.961190] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.435 [2024-11-20 11:23:49.961734] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:04.435 2024/11/20 11:23:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:04.435 [2024-11-20 11:23:49.973117] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.436 [2024-11-20 11:23:49.973169] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.436 2024/11/20 11:23:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:04.436 [2024-11-20 11:23:49.985115] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.436 [2024-11-20 11:23:49.985160] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.436 2024/11/20 11:23:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:04.436 [2024-11-20 11:23:49.996122] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:04.436 [2024-11-20 11:23:49.997116] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.436 [2024-11-20 11:23:49.997154] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.436 2024/11/20 11:23:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:04.436 [2024-11-20 11:23:50.009137] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.436 [2024-11-20 11:23:50.009188] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.436 2024/11/20 11:23:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:04.694 [2024-11-20 11:23:50.021140] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.694 [2024-11-20 11:23:50.021193] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.694 2024/11/20 11:23:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:04.694 [2024-11-20 11:23:50.033128] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.694 [2024-11-20 11:23:50.033178] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.694 2024/11/20 11:23:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:04.694 [2024-11-20 11:23:50.045144] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.694 [2024-11-20 11:23:50.045197] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.694 2024/11/20 11:23:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:04.694 [2024-11-20 11:23:50.057124] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.694 [2024-11-20 11:23:50.057166] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.694 2024/11/20 11:23:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:04.694 [2024-11-20 11:23:50.069159] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.694 [2024-11-20 11:23:50.069216] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.694 2024/11/20 11:23:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:04.694 [2024-11-20 11:23:50.081187] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:11:04.694 [2024-11-20 11:23:50.081251] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.694 2024/11/20 11:23:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:04.694 [2024-11-20 11:23:50.089113] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.694 [2024-11-20 11:23:50.089150] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.694 2024/11/20 11:23:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:04.694 [2024-11-20 11:23:50.097124] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.694 [2024-11-20 11:23:50.097164] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.694 2024/11/20 11:23:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:04.694 [2024-11-20 11:23:50.105140] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.694 [2024-11-20 11:23:50.105185] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.694 2024/11/20 11:23:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:04.694 [2024-11-20 11:23:50.117151] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.694 [2024-11-20 11:23:50.117197] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.694 2024/11/20 11:23:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:04.694 [2024-11-20 11:23:50.129172] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.694 [2024-11-20 11:23:50.129231] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.694 2024/11/20 11:23:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:04.694 Running I/O for 5 seconds... 
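Note on the repeated failures above: the namespace stress path keeps re-issuing nvmf_subsystem_add_ns for NSID 1 against nqn.2016-06.io.spdk:cnode1 while that NSID is still attached, so the target rejects every attempt with "Requested NSID 1 already in use" and the Go RPC client reports JSON-RPC Code=-32602 (the standard "Invalid params" error); this is the expected behaviour under test, not a target fault. A minimal sketch of reproducing one such rejection by hand against a running target is shown below. It assumes the default RPC socket /var/tmp/spdk.sock, a 64 MiB malloc bdev named malloc0, and rpc.py flag spellings recalled from memory that may differ between SPDK versions:

    scripts/rpc.py bdev_malloc_create -b malloc0 64 512
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a
    scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 malloc0
    # re-adding the same NSID is what the test exercises; this second call is rejected with
    # "Requested NSID 1 already in use" / JSON-RPC Code=-32602 Msg=Invalid parameters
    scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 malloc0
    # equivalent raw JSON-RPC request (params mirror the map[...] dump logged above):
    # {"jsonrpc":"2.0","id":1,"method":"nvmf_subsystem_add_ns",
    #  "params":{"nqn":"nqn.2016-06.io.spdk:cnode1",
    #            "namespace":{"bdev_name":"malloc0","nsid":1,"no_auto_visible":false}}}
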
00:11:04.694 [2024-11-20 11:23:50.145039] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.694 [2024-11-20 11:23:50.145101] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.694 2024/11/20 11:23:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:04.695 [2024-11-20 11:23:50.163422] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.695 [2024-11-20 11:23:50.163756] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.695 2024/11/20 11:23:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:04.695 [2024-11-20 11:23:50.178637] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.695 [2024-11-20 11:23:50.178686] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.695 2024/11/20 11:23:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:04.695 [2024-11-20 11:23:50.189052] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.695 [2024-11-20 11:23:50.189092] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.695 2024/11/20 11:23:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:04.695 [2024-11-20 11:23:50.203609] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.695 [2024-11-20 11:23:50.203683] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.695 2024/11/20 11:23:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:04.695 [2024-11-20 11:23:50.216572] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.695 [2024-11-20 11:23:50.216639] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.695 2024/11/20 11:23:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:04.695 [2024-11-20 11:23:50.227914] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.695 [2024-11-20 11:23:50.227959] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.695 2024/11/20 11:23:50 error on JSON-RPC call, 
method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:04.695 [2024-11-20 11:23:50.241003] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.695 [2024-11-20 11:23:50.241047] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.695 2024/11/20 11:23:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:04.695 [2024-11-20 11:23:50.251456] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.695 [2024-11-20 11:23:50.251496] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.695 2024/11/20 11:23:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:04.695 [2024-11-20 11:23:50.262958] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.695 [2024-11-20 11:23:50.263009] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.695 2024/11/20 11:23:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:04.695 [2024-11-20 11:23:50.276582] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.695 [2024-11-20 11:23:50.276647] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.695 2024/11/20 11:23:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:04.953 [2024-11-20 11:23:50.287230] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.953 [2024-11-20 11:23:50.287283] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.953 2024/11/20 11:23:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:04.953 [2024-11-20 11:23:50.308576] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.953 [2024-11-20 11:23:50.308648] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.953 2024/11/20 11:23:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:04.953 [2024-11-20 11:23:50.329204] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.953 [2024-11-20 11:23:50.329259] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.953 2024/11/20 11:23:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:04.953 [2024-11-20 11:23:50.345542] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.953 [2024-11-20 11:23:50.345588] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.953 2024/11/20 11:23:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:04.953 [2024-11-20 11:23:50.361059] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.953 [2024-11-20 11:23:50.361102] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.953 2024/11/20 11:23:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:04.953 [2024-11-20 11:23:50.371976] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.953 [2024-11-20 11:23:50.372018] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.953 2024/11/20 11:23:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:04.953 [2024-11-20 11:23:50.387161] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.953 [2024-11-20 11:23:50.387212] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.953 2024/11/20 11:23:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:04.953 [2024-11-20 11:23:50.404109] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.953 [2024-11-20 11:23:50.404160] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.953 2024/11/20 11:23:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:04.953 [2024-11-20 11:23:50.419901] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.953 [2024-11-20 11:23:50.419955] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.953 2024/11/20 11:23:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:04.953 [2024-11-20 11:23:50.430870] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.953 [2024-11-20 11:23:50.430941] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.953 2024/11/20 11:23:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:04.953 [2024-11-20 11:23:50.445858] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.953 [2024-11-20 11:23:50.446099] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.953 2024/11/20 11:23:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:04.953 [2024-11-20 11:23:50.463253] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.953 [2024-11-20 11:23:50.463324] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.953 2024/11/20 11:23:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:04.954 [2024-11-20 11:23:50.478330] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.954 [2024-11-20 11:23:50.478388] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.954 2024/11/20 11:23:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:04.954 [2024-11-20 11:23:50.494404] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.954 [2024-11-20 11:23:50.494472] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.954 2024/11/20 11:23:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:04.954 [2024-11-20 11:23:50.511026] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.954 [2024-11-20 11:23:50.511108] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.954 2024/11/20 11:23:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:04.954 [2024-11-20 11:23:50.528544] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:11:04.954 [2024-11-20 11:23:50.528628] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.954 2024/11/20 11:23:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:05.212 [2024-11-20 11:23:50.544787] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:05.212 [2024-11-20 11:23:50.544849] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:05.212 2024/11/20 11:23:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:05.212 [2024-11-20 11:23:50.560870] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:05.212 [2024-11-20 11:23:50.560933] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:05.212 2024/11/20 11:23:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:05.212 [2024-11-20 11:23:50.571194] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:05.212 [2024-11-20 11:23:50.571265] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:05.212 2024/11/20 11:23:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:05.212 [2024-11-20 11:23:50.586328] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:05.212 [2024-11-20 11:23:50.586400] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:05.212 2024/11/20 11:23:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:05.212 [2024-11-20 11:23:50.602289] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:05.212 [2024-11-20 11:23:50.602358] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:05.212 2024/11/20 11:23:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:05.212 [2024-11-20 11:23:50.619079] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:05.212 [2024-11-20 11:23:50.619152] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:05.212 2024/11/20 11:23:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 
no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:05.212 [2024-11-20 11:23:50.635549] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:05.212 [2024-11-20 11:23:50.635631] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:05.212 2024/11/20 11:23:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:05.212 [2024-11-20 11:23:50.651453] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:05.212 [2024-11-20 11:23:50.651511] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:05.212 2024/11/20 11:23:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:05.212 [2024-11-20 11:23:50.661566] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:05.212 [2024-11-20 11:23:50.661638] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:05.212 2024/11/20 11:23:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:05.212 [2024-11-20 11:23:50.676677] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:05.212 [2024-11-20 11:23:50.676744] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:05.212 2024/11/20 11:23:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:05.212 [2024-11-20 11:23:50.692802] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:05.212 [2024-11-20 11:23:50.692856] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:05.213 2024/11/20 11:23:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:05.213 [2024-11-20 11:23:50.710844] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:05.213 [2024-11-20 11:23:50.710911] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:05.213 2024/11/20 11:23:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:05.213 [2024-11-20 11:23:50.726666] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:11:05.213 [2024-11-20 11:23:50.726729] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:05.213 2024/11/20 11:23:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:05.213 [2024-11-20 11:23:50.744120] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:05.213 [2024-11-20 11:23:50.744445] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:05.213 2024/11/20 11:23:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:05.213 [2024-11-20 11:23:50.759650] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:05.213 [2024-11-20 11:23:50.759715] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:05.213 2024/11/20 11:23:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:05.213 [2024-11-20 11:23:50.778202] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:05.213 [2024-11-20 11:23:50.778269] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:05.213 2024/11/20 11:23:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:05.213 [2024-11-20 11:23:50.794162] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:05.213 [2024-11-20 11:23:50.794231] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:05.213 2024/11/20 11:23:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:05.471 [2024-11-20 11:23:50.809771] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:05.471 [2024-11-20 11:23:50.809846] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:05.471 2024/11/20 11:23:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:05.471 [2024-11-20 11:23:50.826033] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:05.471 [2024-11-20 11:23:50.826101] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:05.471 2024/11/20 11:23:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:05.471 [2024-11-20 11:23:50.842927] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:05.471 [2024-11-20 11:23:50.843246] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:05.471 2024/11/20 11:23:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:05.471 [2024-11-20 11:23:50.859142] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:05.471 [2024-11-20 11:23:50.859200] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:05.471 2024/11/20 11:23:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:05.471 [2024-11-20 11:23:50.875678] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:05.471 [2024-11-20 11:23:50.875728] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:05.471 2024/11/20 11:23:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:05.471 [2024-11-20 11:23:50.891874] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:05.471 [2024-11-20 11:23:50.891944] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:05.472 2024/11/20 11:23:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:05.472 [2024-11-20 11:23:50.909668] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:05.472 [2024-11-20 11:23:50.909735] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:05.472 2024/11/20 11:23:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:05.472 [2024-11-20 11:23:50.924366] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:05.472 [2024-11-20 11:23:50.924432] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:05.472 2024/11/20 11:23:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:05.472 [2024-11-20 11:23:50.939731] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:05.472 [2024-11-20 11:23:50.939797] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:05.472 2024/11/20 11:23:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:05.472 [2024-11-20 11:23:50.956026] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:05.472 [2024-11-20 11:23:50.956094] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:05.472 2024/11/20 11:23:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:05.472 [2024-11-20 11:23:50.972121] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:05.472 [2024-11-20 11:23:50.972191] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:05.472 2024/11/20 11:23:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:05.472 [2024-11-20 11:23:50.982391] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:05.472 [2024-11-20 11:23:50.982456] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:05.472 2024/11/20 11:23:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:05.472 [2024-11-20 11:23:50.997885] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:05.472 [2024-11-20 11:23:50.997952] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:05.472 2024/11/20 11:23:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:05.472 [2024-11-20 11:23:51.014063] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:05.472 [2024-11-20 11:23:51.014132] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:05.472 2024/11/20 11:23:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:05.472 [2024-11-20 11:23:51.030885] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:05.472 [2024-11-20 11:23:51.030952] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:05.472 2024/11/20 11:23:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for 
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:05.472 [2024-11-20 11:23:51.047644] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:05.472 [2024-11-20 11:23:51.047710] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:05.472 2024/11/20 11:23:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:05.731 [2024-11-20 11:23:51.062721] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:05.731 [2024-11-20 11:23:51.062785] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:05.731 2024/11/20 11:23:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:05.731 [2024-11-20 11:23:51.079251] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:05.731 [2024-11-20 11:23:51.079321] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:05.731 2024/11/20 11:23:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:05.731 [2024-11-20 11:23:51.095538] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:05.731 [2024-11-20 11:23:51.095607] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:05.731 2024/11/20 11:23:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:05.731 [2024-11-20 11:23:51.112267] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:05.731 [2024-11-20 11:23:51.112330] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:05.731 2024/11/20 11:23:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:05.731 [2024-11-20 11:23:51.129411] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:05.731 [2024-11-20 11:23:51.129680] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:05.731 2024/11/20 11:23:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:05.731 11089.00 IOPS, 86.63 MiB/s [2024-11-20T11:23:51.320Z] [2024-11-20 11:23:51.146275] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:05.731 [2024-11-20 11:23:51.146342] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:05.731 2024/11/20 11:23:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:05.731 [2024-11-20 11:23:51.162547] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:05.731 [2024-11-20 11:23:51.162629] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:05.731 2024/11/20 11:23:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:05.731 [2024-11-20 11:23:51.179380] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:05.731 [2024-11-20 11:23:51.179447] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:05.731 2024/11/20 11:23:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:05.731 [2024-11-20 11:23:51.195241] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:05.731 [2024-11-20 11:23:51.195314] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:05.731 2024/11/20 11:23:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:05.731 [2024-11-20 11:23:51.205915] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:05.731 [2024-11-20 11:23:51.205982] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:05.731 2024/11/20 11:23:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:05.731 [2024-11-20 11:23:51.221021] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:05.731 [2024-11-20 11:23:51.221277] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:05.731 2024/11/20 11:23:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:05.731 [2024-11-20 11:23:51.238441] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:05.731 [2024-11-20 11:23:51.238515] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:05.731 2024/11/20 11:23:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for 
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:05.731 [2024-11-20 11:23:51.253591] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:05.731 [2024-11-20 11:23:51.253668] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:05.731 2024/11/20 11:23:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:05.731 [2024-11-20 11:23:51.269889] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:05.731 [2024-11-20 11:23:51.269941] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:05.731 2024/11/20 11:23:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:05.731 [2024-11-20 11:23:51.287165] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:05.731 [2024-11-20 11:23:51.287237] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:05.731 2024/11/20 11:23:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:05.731 [2024-11-20 11:23:51.302565] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:05.731 [2024-11-20 11:23:51.302642] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:05.731 2024/11/20 11:23:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:05.990 [2024-11-20 11:23:51.319141] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:05.990 [2024-11-20 11:23:51.319206] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:05.990 2024/11/20 11:23:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:05.990 [2024-11-20 11:23:51.335429] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:05.990 [2024-11-20 11:23:51.335491] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:05.990 2024/11/20 11:23:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:05.990 [2024-11-20 11:23:51.352182] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:05.990 [2024-11-20 11:23:51.352455] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:11:05.990 2024/11/20 11:23:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:05.990 [2024-11-20 11:23:51.368250] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:05.990 [2024-11-20 11:23:51.368322] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:05.990 2024/11/20 11:23:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:05.990 [2024-11-20 11:23:51.385030] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:05.990 [2024-11-20 11:23:51.385093] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:05.990 2024/11/20 11:23:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:05.990 [2024-11-20 11:23:51.400847] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:05.990 [2024-11-20 11:23:51.400923] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:05.990 2024/11/20 11:23:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:05.990 [2024-11-20 11:23:51.417724] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:05.990 [2024-11-20 11:23:51.417786] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:05.991 2024/11/20 11:23:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:05.991 [2024-11-20 11:23:51.436251] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:05.991 [2024-11-20 11:23:51.436320] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:05.991 2024/11/20 11:23:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:05.991 [2024-11-20 11:23:51.451573] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:05.991 [2024-11-20 11:23:51.451828] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:05.991 2024/11/20 11:23:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 00:11:05.991 [2024-11-20 11:23:51.468045] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:05.991 [2024-11-20 11:23:51.468105] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:05.991 2024/11/20 11:23:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:05.991 [2024-11-20 11:23:51.485339] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:05.991 [2024-11-20 11:23:51.485393] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:05.991 2024/11/20 11:23:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:05.991 [2024-11-20 11:23:51.501926] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:05.991 [2024-11-20 11:23:51.501992] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:05.991 2024/11/20 11:23:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:05.991 [2024-11-20 11:23:51.517034] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:05.991 [2024-11-20 11:23:51.517106] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:05.991 2024/11/20 11:23:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:05.991 [2024-11-20 11:23:51.534764] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:05.991 [2024-11-20 11:23:51.534837] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:05.991 2024/11/20 11:23:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:05.991 [2024-11-20 11:23:51.549779] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:05.991 [2024-11-20 11:23:51.550056] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:05.991 2024/11/20 11:23:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:05.991 [2024-11-20 11:23:51.565749] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:05.991 [2024-11-20 11:23:51.565815] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:05.991 2024/11/20 11:23:51 error on 
JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:06.249 [2024-11-20 11:23:51.583424] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.249 [2024-11-20 11:23:51.583489] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.250 2024/11/20 11:23:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:06.250 [2024-11-20 11:23:51.599676] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.250 [2024-11-20 11:23:51.599956] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.250 2024/11/20 11:23:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:06.250 [2024-11-20 11:23:51.610746] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.250 [2024-11-20 11:23:51.610830] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.250 2024/11/20 11:23:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:06.250 [2024-11-20 11:23:51.621523] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.250 [2024-11-20 11:23:51.621590] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.250 2024/11/20 11:23:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:06.250 [2024-11-20 11:23:51.637465] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.250 [2024-11-20 11:23:51.637524] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.250 2024/11/20 11:23:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:06.250 [2024-11-20 11:23:51.647504] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.250 [2024-11-20 11:23:51.647567] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.250 2024/11/20 11:23:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:06.250 [2024-11-20 11:23:51.662744] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.250 [2024-11-20 11:23:51.662803] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.250 2024/11/20 11:23:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:06.250 [2024-11-20 11:23:51.679056] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.250 [2024-11-20 11:23:51.679124] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.250 2024/11/20 11:23:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:06.250 [2024-11-20 11:23:51.695549] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.250 [2024-11-20 11:23:51.695627] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.250 2024/11/20 11:23:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:06.250 [2024-11-20 11:23:51.706110] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.250 [2024-11-20 11:23:51.706163] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.250 2024/11/20 11:23:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:06.250 [2024-11-20 11:23:51.721555] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.250 [2024-11-20 11:23:51.721633] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.250 2024/11/20 11:23:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:06.250 [2024-11-20 11:23:51.736811] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.250 [2024-11-20 11:23:51.736881] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.250 2024/11/20 11:23:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:06.250 [2024-11-20 11:23:51.753719] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.250 [2024-11-20 11:23:51.753789] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.250 2024/11/20 11:23:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:06.250 [2024-11-20 11:23:51.770278] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.250 [2024-11-20 11:23:51.770350] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.250 2024/11/20 11:23:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:06.250 [2024-11-20 11:23:51.780547] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.250 [2024-11-20 11:23:51.780590] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.250 2024/11/20 11:23:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:06.250 [2024-11-20 11:23:51.792284] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.250 [2024-11-20 11:23:51.792333] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.250 2024/11/20 11:23:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:06.250 [2024-11-20 11:23:51.804851] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.250 [2024-11-20 11:23:51.804901] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.250 2024/11/20 11:23:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:06.250 [2024-11-20 11:23:51.820130] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.250 [2024-11-20 11:23:51.820212] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.250 2024/11/20 11:23:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:06.508 [2024-11-20 11:23:51.836347] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.508 [2024-11-20 11:23:51.836419] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.509 2024/11/20 11:23:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:06.509 [2024-11-20 11:23:51.852863] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:11:06.509 [2024-11-20 11:23:51.852931] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.509 2024/11/20 11:23:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:06.509 [2024-11-20 11:23:51.872149] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.509 [2024-11-20 11:23:51.872221] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.509 2024/11/20 11:23:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:06.509 [2024-11-20 11:23:51.888121] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.509 [2024-11-20 11:23:51.888181] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.509 2024/11/20 11:23:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:06.509 [2024-11-20 11:23:51.904861] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.509 [2024-11-20 11:23:51.904927] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.509 2024/11/20 11:23:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:06.509 [2024-11-20 11:23:51.920012] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.509 [2024-11-20 11:23:51.920079] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.509 2024/11/20 11:23:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:06.509 [2024-11-20 11:23:51.936781] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.509 [2024-11-20 11:23:51.936857] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.509 2024/11/20 11:23:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:06.509 [2024-11-20 11:23:51.952060] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.509 [2024-11-20 11:23:51.952138] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.509 2024/11/20 11:23:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 
no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:06.509 [2024-11-20 11:23:51.968158] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.509 [2024-11-20 11:23:51.968241] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.509 2024/11/20 11:23:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:06.509 [2024-11-20 11:23:51.985676] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.509 [2024-11-20 11:23:51.985746] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.509 2024/11/20 11:23:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:06.509 [2024-11-20 11:23:52.003391] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.509 [2024-11-20 11:23:52.003460] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.509 2024/11/20 11:23:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:06.509 [2024-11-20 11:23:52.019048] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.509 [2024-11-20 11:23:52.019109] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.509 2024/11/20 11:23:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:06.509 [2024-11-20 11:23:52.028545] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.509 [2024-11-20 11:23:52.028625] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.509 2024/11/20 11:23:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:06.509 [2024-11-20 11:23:52.044042] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.509 [2024-11-20 11:23:52.044117] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.509 2024/11/20 11:23:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:06.509 [2024-11-20 11:23:52.060377] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:11:06.509 [2024-11-20 11:23:52.060442] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.509 2024/11/20 11:23:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:06.509 [2024-11-20 11:23:52.077795] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.509 [2024-11-20 11:23:52.077863] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.509 2024/11/20 11:23:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:06.509 [2024-11-20 11:23:52.093232] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.509 [2024-11-20 11:23:52.093304] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.509 2024/11/20 11:23:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:06.768 [2024-11-20 11:23:52.104639] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.768 [2024-11-20 11:23:52.104702] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.768 2024/11/20 11:23:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:06.768 [2024-11-20 11:23:52.120833] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.768 [2024-11-20 11:23:52.120902] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.768 2024/11/20 11:23:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:06.768 [2024-11-20 11:23:52.136107] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.768 [2024-11-20 11:23:52.136187] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.768 2024/11/20 11:23:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:06.768 11085.50 IOPS, 86.61 MiB/s [2024-11-20T11:23:52.357Z] [2024-11-20 11:23:52.146225] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.768 [2024-11-20 11:23:52.146286] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.768 2024/11/20 11:23:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 
no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:06.768 [2024-11-20 11:23:52.161524] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.768 [2024-11-20 11:23:52.161597] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.768 2024/11/20 11:23:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:06.768 [2024-11-20 11:23:52.178736] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.768 [2024-11-20 11:23:52.178811] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.768 2024/11/20 11:23:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:06.768 [2024-11-20 11:23:52.194353] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.768 [2024-11-20 11:23:52.194425] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.768 2024/11/20 11:23:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:06.768 [2024-11-20 11:23:52.204699] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.768 [2024-11-20 11:23:52.204762] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.768 2024/11/20 11:23:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:06.768 [2024-11-20 11:23:52.219709] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.768 [2024-11-20 11:23:52.219778] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.768 2024/11/20 11:23:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:06.768 [2024-11-20 11:23:52.237188] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.768 [2024-11-20 11:23:52.237264] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.768 2024/11/20 11:23:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:06.768 [2024-11-20 11:23:52.254432] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:11:06.768 [2024-11-20 11:23:52.254506] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.768 2024/11/20 11:23:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:06.768 [2024-11-20 11:23:52.271289] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.768 [2024-11-20 11:23:52.271379] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.768 2024/11/20 11:23:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:06.768 [2024-11-20 11:23:52.288519] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.768 [2024-11-20 11:23:52.288596] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.768 2024/11/20 11:23:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:06.768 [2024-11-20 11:23:52.304832] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.768 [2024-11-20 11:23:52.304898] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.768 2024/11/20 11:23:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:06.768 [2024-11-20 11:23:52.321991] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.768 [2024-11-20 11:23:52.322063] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.768 2024/11/20 11:23:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:06.768 [2024-11-20 11:23:52.339073] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.768 [2024-11-20 11:23:52.339145] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.768 2024/11/20 11:23:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:07.027 [2024-11-20 11:23:52.355526] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.027 [2024-11-20 11:23:52.355580] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.027 2024/11/20 11:23:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:07.027 [2024-11-20 11:23:52.365530] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.027 [2024-11-20 11:23:52.365573] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.027 2024/11/20 11:23:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:07.027 [2024-11-20 11:23:52.380898] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.027 [2024-11-20 11:23:52.380953] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.027 2024/11/20 11:23:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:07.027 [2024-11-20 11:23:52.391592] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.027 [2024-11-20 11:23:52.391649] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.027 2024/11/20 11:23:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:07.027 [2024-11-20 11:23:52.403014] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.027 [2024-11-20 11:23:52.403059] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.028 2024/11/20 11:23:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:07.028 [2024-11-20 11:23:52.419026] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.028 [2024-11-20 11:23:52.419084] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.028 2024/11/20 11:23:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:07.028 [2024-11-20 11:23:52.436165] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.028 [2024-11-20 11:23:52.436223] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.028 2024/11/20 11:23:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:07.028 [2024-11-20 11:23:52.451237] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.028 [2024-11-20 11:23:52.451289] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.028 2024/11/20 11:23:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:07.028 [2024-11-20 11:23:52.468115] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.028 [2024-11-20 11:23:52.468171] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.028 2024/11/20 11:23:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:07.028 [2024-11-20 11:23:52.484373] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.028 [2024-11-20 11:23:52.484421] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.028 2024/11/20 11:23:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:07.028 [2024-11-20 11:23:52.504578] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.028 [2024-11-20 11:23:52.504669] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.028 2024/11/20 11:23:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:07.028 [2024-11-20 11:23:52.521308] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.028 [2024-11-20 11:23:52.521387] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.028 2024/11/20 11:23:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:07.028 [2024-11-20 11:23:52.538303] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.028 [2024-11-20 11:23:52.538369] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.028 2024/11/20 11:23:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:07.028 [2024-11-20 11:23:52.554634] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.028 [2024-11-20 11:23:52.554693] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.028 2024/11/20 11:23:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for 
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:07.028 [2024-11-20 11:23:52.570949] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.028 [2024-11-20 11:23:52.571016] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.028 2024/11/20 11:23:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:07.028 [2024-11-20 11:23:52.587167] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.028 [2024-11-20 11:23:52.587221] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.028 2024/11/20 11:23:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:07.028 [2024-11-20 11:23:52.603964] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.028 [2024-11-20 11:23:52.604027] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.028 2024/11/20 11:23:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:07.287 [2024-11-20 11:23:52.620598] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.287 [2024-11-20 11:23:52.620693] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.287 2024/11/20 11:23:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:07.287 [2024-11-20 11:23:52.637355] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.287 [2024-11-20 11:23:52.637437] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.287 2024/11/20 11:23:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:07.287 [2024-11-20 11:23:52.653723] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.287 [2024-11-20 11:23:52.653789] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.287 2024/11/20 11:23:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:07.287 [2024-11-20 11:23:52.670997] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.287 [2024-11-20 11:23:52.671059] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:11:07.287 2024/11/20 11:23:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:07.287 [2024-11-20 11:23:52.686312] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.287 [2024-11-20 11:23:52.686363] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.287 2024/11/20 11:23:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:07.287 [2024-11-20 11:23:52.696026] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.287 [2024-11-20 11:23:52.696194] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.287 2024/11/20 11:23:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:07.287 [2024-11-20 11:23:52.710699] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.287 [2024-11-20 11:23:52.710746] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.287 2024/11/20 11:23:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:07.287 [2024-11-20 11:23:52.726350] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.287 [2024-11-20 11:23:52.726570] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.287 2024/11/20 11:23:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:07.287 [2024-11-20 11:23:52.736552] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.287 [2024-11-20 11:23:52.736597] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.287 2024/11/20 11:23:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:07.288 [2024-11-20 11:23:52.751421] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.288 [2024-11-20 11:23:52.751482] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.288 2024/11/20 11:23:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 00:11:07.288 [2024-11-20 11:23:52.768260] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.288 [2024-11-20 11:23:52.768314] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.288 2024/11/20 11:23:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:07.288 [2024-11-20 11:23:52.785223] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.288 [2024-11-20 11:23:52.785279] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.288 2024/11/20 11:23:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:07.288 [2024-11-20 11:23:52.801417] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.288 [2024-11-20 11:23:52.801485] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.288 2024/11/20 11:23:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:07.288 [2024-11-20 11:23:52.819497] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.288 [2024-11-20 11:23:52.819549] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.288 2024/11/20 11:23:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:07.288 [2024-11-20 11:23:52.834506] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.288 [2024-11-20 11:23:52.834720] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.288 2024/11/20 11:23:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:07.288 [2024-11-20 11:23:52.845107] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.288 [2024-11-20 11:23:52.845152] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.288 2024/11/20 11:23:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:07.288 [2024-11-20 11:23:52.860878] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.288 [2024-11-20 11:23:52.860948] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.288 2024/11/20 11:23:52 error on 
JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:07.547 [2024-11-20 11:23:52.877255] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.547 [2024-11-20 11:23:52.877362] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.547 2024/11/20 11:23:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:07.547 [2024-11-20 11:23:52.892852] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.547 [2024-11-20 11:23:52.893122] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.547 2024/11/20 11:23:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:07.547 [2024-11-20 11:23:52.908988] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.547 [2024-11-20 11:23:52.909041] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.547 2024/11/20 11:23:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:07.547 [2024-11-20 11:23:52.919046] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.547 [2024-11-20 11:23:52.919095] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.547 2024/11/20 11:23:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:07.547 [2024-11-20 11:23:52.934545] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.547 [2024-11-20 11:23:52.934772] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.547 2024/11/20 11:23:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:07.547 [2024-11-20 11:23:52.951658] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.547 [2024-11-20 11:23:52.951707] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.547 2024/11/20 11:23:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:07.547 [2024-11-20 11:23:52.968752] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.547 [2024-11-20 11:23:52.968966] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.547 2024/11/20 11:23:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:07.547 [2024-11-20 11:23:52.984322] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.547 [2024-11-20 11:23:52.984378] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.547 2024/11/20 11:23:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:07.547 [2024-11-20 11:23:53.000229] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.547 [2024-11-20 11:23:53.000290] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.547 2024/11/20 11:23:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:07.547 [2024-11-20 11:23:53.011295] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.547 [2024-11-20 11:23:53.011354] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.547 2024/11/20 11:23:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:07.547 [2024-11-20 11:23:53.027278] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.547 [2024-11-20 11:23:53.027343] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.547 2024/11/20 11:23:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:07.547 [2024-11-20 11:23:53.042917] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.547 [2024-11-20 11:23:53.042997] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.547 2024/11/20 11:23:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:07.547 [2024-11-20 11:23:53.058981] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.547 [2024-11-20 11:23:53.059042] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.547 2024/11/20 11:23:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:07.547 [2024-11-20 11:23:53.070630] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.547 [2024-11-20 11:23:53.070702] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.547 2024/11/20 11:23:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:07.547 [2024-11-20 11:23:53.085571] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.547 [2024-11-20 11:23:53.085644] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.547 2024/11/20 11:23:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:07.548 [2024-11-20 11:23:53.100961] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.548 [2024-11-20 11:23:53.101013] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.548 2024/11/20 11:23:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:07.548 [2024-11-20 11:23:53.112760] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.548 [2024-11-20 11:23:53.112808] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.548 2024/11/20 11:23:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:07.548 [2024-11-20 11:23:53.123944] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.548 [2024-11-20 11:23:53.123991] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.548 2024/11/20 11:23:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:07.807 [2024-11-20 11:23:53.136188] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.807 [2024-11-20 11:23:53.136237] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.807 11030.33 IOPS, 86.17 MiB/s [2024-11-20T11:23:53.396Z] 2024/11/20 11:23:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:07.807 [2024-11-20 11:23:53.152095] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.807 [2024-11-20 11:23:53.152146] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.807 2024/11/20 11:23:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:07.807 [2024-11-20 11:23:53.167977] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.807 [2024-11-20 11:23:53.168035] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.807 2024/11/20 11:23:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:07.807 [2024-11-20 11:23:53.179241] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.807 [2024-11-20 11:23:53.179310] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.807 2024/11/20 11:23:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:07.807 [2024-11-20 11:23:53.194359] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.807 [2024-11-20 11:23:53.194694] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.807 2024/11/20 11:23:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:07.807 [2024-11-20 11:23:53.210233] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.807 [2024-11-20 11:23:53.210307] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.807 2024/11/20 11:23:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:07.807 [2024-11-20 11:23:53.226067] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.807 [2024-11-20 11:23:53.226139] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.807 2024/11/20 11:23:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:07.807 [2024-11-20 11:23:53.243371] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.807 [2024-11-20 11:23:53.243444] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.807 2024/11/20 11:23:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:07.807 [2024-11-20 11:23:53.260027] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.807 [2024-11-20 11:23:53.260105] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.807 2024/11/20 11:23:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:07.807 [2024-11-20 11:23:53.277252] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.807 [2024-11-20 11:23:53.277325] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.807 2024/11/20 11:23:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:07.807 [2024-11-20 11:23:53.293029] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.807 [2024-11-20 11:23:53.293094] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.807 2024/11/20 11:23:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:07.807 [2024-11-20 11:23:53.309424] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.807 [2024-11-20 11:23:53.309492] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.807 2024/11/20 11:23:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:07.807 [2024-11-20 11:23:53.326570] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.807 [2024-11-20 11:23:53.326654] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.807 2024/11/20 11:23:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:07.807 [2024-11-20 11:23:53.342035] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.807 [2024-11-20 11:23:53.342106] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.807 2024/11/20 11:23:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:07.808 [2024-11-20 11:23:53.358235] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:11:07.808 [2024-11-20 11:23:53.358293] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.808 2024/11/20 11:23:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:07.808 [2024-11-20 11:23:53.369653] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.808 [2024-11-20 11:23:53.369721] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.808 2024/11/20 11:23:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:07.808 [2024-11-20 11:23:53.384318] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.808 [2024-11-20 11:23:53.384389] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.808 2024/11/20 11:23:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:08.067 [2024-11-20 11:23:53.400338] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.067 [2024-11-20 11:23:53.400395] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.067 2024/11/20 11:23:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:08.067 [2024-11-20 11:23:53.410873] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.067 [2024-11-20 11:23:53.410942] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.067 2024/11/20 11:23:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:08.067 [2024-11-20 11:23:53.426164] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.067 [2024-11-20 11:23:53.426251] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.067 2024/11/20 11:23:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:08.067 [2024-11-20 11:23:53.438513] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.067 [2024-11-20 11:23:53.438579] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.067 2024/11/20 11:23:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 
no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:08.067 [2024-11-20 11:23:53.453729] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.067 [2024-11-20 11:23:53.453794] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.067 2024/11/20 11:23:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:08.067 [2024-11-20 11:23:53.471489] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.067 [2024-11-20 11:23:53.471556] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.067 2024/11/20 11:23:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:08.067 [2024-11-20 11:23:53.487830] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.067 [2024-11-20 11:23:53.487899] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.067 2024/11/20 11:23:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:08.067 [2024-11-20 11:23:53.504174] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.067 [2024-11-20 11:23:53.504237] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.067 2024/11/20 11:23:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:08.067 [2024-11-20 11:23:53.520263] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.067 [2024-11-20 11:23:53.520331] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.067 2024/11/20 11:23:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:08.067 [2024-11-20 11:23:53.530036] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.067 [2024-11-20 11:23:53.530094] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.067 2024/11/20 11:23:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:08.067 [2024-11-20 11:23:53.545038] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:11:08.067 [2024-11-20 11:23:53.545114] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.067 2024/11/20 11:23:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:08.067 [2024-11-20 11:23:53.555476] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.067 [2024-11-20 11:23:53.555535] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.067 2024/11/20 11:23:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:08.067 [2024-11-20 11:23:53.570202] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.067 [2024-11-20 11:23:53.570264] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.067 2024/11/20 11:23:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:08.067 [2024-11-20 11:23:53.587028] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.067 [2024-11-20 11:23:53.587093] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.067 2024/11/20 11:23:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:08.067 [2024-11-20 11:23:53.603665] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.067 [2024-11-20 11:23:53.603727] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.067 2024/11/20 11:23:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:08.067 [2024-11-20 11:23:53.621101] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.067 [2024-11-20 11:23:53.621156] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.067 2024/11/20 11:23:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:08.067 [2024-11-20 11:23:53.637083] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.067 [2024-11-20 11:23:53.637149] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.067 2024/11/20 11:23:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:08.326 [2024-11-20 11:23:53.655513] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.326 [2024-11-20 11:23:53.655580] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.326 2024/11/20 11:23:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:08.326 [2024-11-20 11:23:53.670862] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.326 [2024-11-20 11:23:53.670925] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.326 2024/11/20 11:23:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:08.326 [2024-11-20 11:23:53.689329] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.326 [2024-11-20 11:23:53.689415] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.326 2024/11/20 11:23:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:08.326 [2024-11-20 11:23:53.705610] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.326 [2024-11-20 11:23:53.705691] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.326 2024/11/20 11:23:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:08.326 [2024-11-20 11:23:53.722694] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.326 [2024-11-20 11:23:53.722761] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.326 2024/11/20 11:23:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:08.326 [2024-11-20 11:23:53.739163] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.326 [2024-11-20 11:23:53.739235] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.326 2024/11/20 11:23:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:08.326 [2024-11-20 11:23:53.755069] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.326 [2024-11-20 11:23:53.755136] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.326 2024/11/20 11:23:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:08.326 [2024-11-20 11:23:53.773929] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.326 [2024-11-20 11:23:53.773996] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.326 2024/11/20 11:23:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:08.326 [2024-11-20 11:23:53.789190] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.326 [2024-11-20 11:23:53.789261] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.326 2024/11/20 11:23:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:08.326 [2024-11-20 11:23:53.806708] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.326 [2024-11-20 11:23:53.806778] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.326 2024/11/20 11:23:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:08.326 [2024-11-20 11:23:53.821821] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.326 [2024-11-20 11:23:53.821885] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.326 2024/11/20 11:23:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:08.327 [2024-11-20 11:23:53.837938] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.327 [2024-11-20 11:23:53.838001] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.327 2024/11/20 11:23:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:08.327 [2024-11-20 11:23:53.855915] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.327 [2024-11-20 11:23:53.855982] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.327 2024/11/20 11:23:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for 
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:08.327 [2024-11-20 11:23:53.872333] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.327 [2024-11-20 11:23:53.872399] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.327 2024/11/20 11:23:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:08.327 [2024-11-20 11:23:53.887992] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.327 [2024-11-20 11:23:53.888059] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.327 2024/11/20 11:23:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:08.327 [2024-11-20 11:23:53.904677] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.327 [2024-11-20 11:23:53.904740] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.327 2024/11/20 11:23:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:08.586 [2024-11-20 11:23:53.920477] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.586 [2024-11-20 11:23:53.920533] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.586 2024/11/20 11:23:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:08.586 [2024-11-20 11:23:53.936934] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.586 [2024-11-20 11:23:53.936999] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.586 2024/11/20 11:23:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:08.586 [2024-11-20 11:23:53.953366] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.586 [2024-11-20 11:23:53.953432] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.586 2024/11/20 11:23:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:08.586 [2024-11-20 11:23:53.970704] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.586 [2024-11-20 11:23:53.970770] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:11:08.586 2024/11/20 11:23:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:08.586 [2024-11-20 11:23:53.985968] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.586 [2024-11-20 11:23:53.986038] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.586 2024/11/20 11:23:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:08.586 [2024-11-20 11:23:53.996362] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.586 [2024-11-20 11:23:53.996430] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.586 2024/11/20 11:23:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:08.586 [2024-11-20 11:23:54.011812] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.586 [2024-11-20 11:23:54.011882] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.586 2024/11/20 11:23:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:08.586 [2024-11-20 11:23:54.027319] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.586 [2024-11-20 11:23:54.027403] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.587 2024/11/20 11:23:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:08.587 [2024-11-20 11:23:54.037766] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.587 [2024-11-20 11:23:54.037833] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.587 2024/11/20 11:23:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:08.587 [2024-11-20 11:23:54.052852] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.587 [2024-11-20 11:23:54.053117] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.587 2024/11/20 11:23:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 00:11:08.587 [2024-11-20 11:23:54.068645] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.587 [2024-11-20 11:23:54.068717] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.587 2024/11/20 11:23:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:08.587 [2024-11-20 11:23:54.084298] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.587 [2024-11-20 11:23:54.084368] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.587 2024/11/20 11:23:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:08.587 [2024-11-20 11:23:54.101483] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.587 [2024-11-20 11:23:54.101566] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.587 2024/11/20 11:23:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:08.587 [2024-11-20 11:23:54.117009] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.587 [2024-11-20 11:23:54.117083] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.587 2024/11/20 11:23:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:08.587 [2024-11-20 11:23:54.132961] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.587 [2024-11-20 11:23:54.133034] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.587 2024/11/20 11:23:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:08.587 10964.00 IOPS, 85.66 MiB/s [2024-11-20T11:23:54.176Z] [2024-11-20 11:23:54.143452] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.587 [2024-11-20 11:23:54.143710] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.587 2024/11/20 11:23:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:08.587 [2024-11-20 11:23:54.159434] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.587 [2024-11-20 11:23:54.159704] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:11:08.587 2024/11/20 11:23:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:08.846 [2024-11-20 11:23:54.176595] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.846 [2024-11-20 11:23:54.176680] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.846 2024/11/20 11:23:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:08.846 [2024-11-20 11:23:54.193214] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.846 [2024-11-20 11:23:54.193289] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.846 2024/11/20 11:23:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:08.846 [2024-11-20 11:23:54.210152] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.846 [2024-11-20 11:23:54.210459] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.846 2024/11/20 11:23:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:08.846 [2024-11-20 11:23:54.226475] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.846 [2024-11-20 11:23:54.226543] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.846 2024/11/20 11:23:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:08.846 [2024-11-20 11:23:54.242664] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.846 [2024-11-20 11:23:54.242728] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.846 2024/11/20 11:23:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:08.846 [2024-11-20 11:23:54.253031] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.846 [2024-11-20 11:23:54.253349] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.846 2024/11/20 11:23:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 00:11:08.846 [2024-11-20 11:23:54.268979] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.846 [2024-11-20 11:23:54.269054] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.846 2024/11/20 11:23:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:08.846 [2024-11-20 11:23:54.286745] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.846 [2024-11-20 11:23:54.286824] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.846 2024/11/20 11:23:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:08.846 [2024-11-20 11:23:54.301791] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.846 [2024-11-20 11:23:54.301861] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.846 2024/11/20 11:23:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:08.846 [2024-11-20 11:23:54.320339] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.846 [2024-11-20 11:23:54.320638] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.846 2024/11/20 11:23:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:08.846 [2024-11-20 11:23:54.335609] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.847 [2024-11-20 11:23:54.335676] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.847 2024/11/20 11:23:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:08.847 [2024-11-20 11:23:54.351485] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.847 [2024-11-20 11:23:54.351546] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.847 2024/11/20 11:23:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:08.847 [2024-11-20 11:23:54.370161] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.847 [2024-11-20 11:23:54.370230] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.847 2024/11/20 11:23:54 error on 
JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:08.847 [2024-11-20 11:23:54.385506] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.847 [2024-11-20 11:23:54.385582] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.847 2024/11/20 11:23:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:08.847 [2024-11-20 11:23:54.406243] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.847 [2024-11-20 11:23:54.406339] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.847 2024/11/20 11:23:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:08.847 [2024-11-20 11:23:54.422025] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.847 [2024-11-20 11:23:54.422115] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.847 2024/11/20 11:23:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:09.106 [2024-11-20 11:23:54.439495] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:09.106 [2024-11-20 11:23:54.439593] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:09.106 2024/11/20 11:23:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:09.106 [2024-11-20 11:23:54.457209] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:09.106 [2024-11-20 11:23:54.457298] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:09.106 2024/11/20 11:23:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:09.106 [2024-11-20 11:23:54.471992] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:09.106 [2024-11-20 11:23:54.472061] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:09.106 2024/11/20 11:23:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:09.106 [2024-11-20 11:23:54.489419] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:09.106 [2024-11-20 11:23:54.489484] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:09.106 2024/11/20 11:23:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:09.106 [2024-11-20 11:23:54.506205] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:09.106 [2024-11-20 11:23:54.506272] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:09.106 2024/11/20 11:23:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:09.106 [2024-11-20 11:23:54.522478] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:09.106 [2024-11-20 11:23:54.522542] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:09.106 2024/11/20 11:23:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:09.106 [2024-11-20 11:23:54.538735] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:09.106 [2024-11-20 11:23:54.538797] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:09.106 2024/11/20 11:23:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:09.106 [2024-11-20 11:23:54.551172] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:09.106 [2024-11-20 11:23:54.551234] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:09.106 2024/11/20 11:23:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:09.106 [2024-11-20 11:23:54.569021] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:09.106 [2024-11-20 11:23:54.569086] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:09.106 2024/11/20 11:23:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:09.106 [2024-11-20 11:23:54.584224] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:09.106 [2024-11-20 11:23:54.584289] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:09.106 2024/11/20 11:23:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:09.106 [2024-11-20 11:23:54.601272] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:09.106 [2024-11-20 11:23:54.601348] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:09.106 2024/11/20 11:23:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:09.106 [2024-11-20 11:23:54.619558] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:09.106 [2024-11-20 11:23:54.619638] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:09.106 2024/11/20 11:23:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:09.106 [2024-11-20 11:23:54.635042] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:09.106 [2024-11-20 11:23:54.635113] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:09.106 2024/11/20 11:23:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:09.106 [2024-11-20 11:23:54.651306] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:09.106 [2024-11-20 11:23:54.651376] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:09.107 2024/11/20 11:23:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:09.107 [2024-11-20 11:23:54.661684] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:09.107 [2024-11-20 11:23:54.661743] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:09.107 2024/11/20 11:23:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:09.107 [2024-11-20 11:23:54.676315] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:09.107 [2024-11-20 11:23:54.676391] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:09.107 2024/11/20 11:23:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:09.366 [2024-11-20 11:23:54.693226] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:11:09.366 [2024-11-20 11:23:54.693499] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:09.366 2024/11/20 11:23:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:09.366 [2024-11-20 11:23:54.709334] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:09.366 [2024-11-20 11:23:54.709586] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:09.366 2024/11/20 11:23:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:09.366 [2024-11-20 11:23:54.719707] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:09.366 [2024-11-20 11:23:54.719921] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:09.366 2024/11/20 11:23:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:09.366 [2024-11-20 11:23:54.734705] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:09.366 [2024-11-20 11:23:54.734897] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:09.366 2024/11/20 11:23:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:09.366 [2024-11-20 11:23:54.751547] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:09.366 [2024-11-20 11:23:54.751768] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:09.366 2024/11/20 11:23:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:09.366 [2024-11-20 11:23:54.767038] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:09.366 [2024-11-20 11:23:54.767234] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:09.366 2024/11/20 11:23:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:09.366 [2024-11-20 11:23:54.784383] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:09.366 [2024-11-20 11:23:54.784572] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:09.366 2024/11/20 11:23:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 
no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:09.366 [2024-11-20 11:23:54.799795] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:09.366 [2024-11-20 11:23:54.799856] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:09.366 2024/11/20 11:23:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:09.366 [2024-11-20 11:23:54.817723] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:09.366 [2024-11-20 11:23:54.817785] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:09.366 2024/11/20 11:23:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:09.366 [2024-11-20 11:23:54.832602] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:09.366 [2024-11-20 11:23:54.832671] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:09.366 2024/11/20 11:23:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:09.366 [2024-11-20 11:23:54.842996] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:09.366 [2024-11-20 11:23:54.843057] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:09.366 2024/11/20 11:23:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:09.366 [2024-11-20 11:23:54.857656] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:09.366 [2024-11-20 11:23:54.857715] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:09.366 2024/11/20 11:23:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:09.366 [2024-11-20 11:23:54.874809] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:09.366 [2024-11-20 11:23:54.874876] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:09.366 2024/11/20 11:23:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:09.366 [2024-11-20 11:23:54.890629] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:11:09.366 [2024-11-20 11:23:54.890691] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:09.366 2024/11/20 11:23:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:09.366 [2024-11-20 11:23:54.901132] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:09.366 [2024-11-20 11:23:54.901187] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:09.366 2024/11/20 11:23:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:09.366 [2024-11-20 11:23:54.916778] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:09.366 [2024-11-20 11:23:54.916840] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:09.366 2024/11/20 11:23:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:09.366 [2024-11-20 11:23:54.932811] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:09.366 [2024-11-20 11:23:54.932860] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:09.366 2024/11/20 11:23:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:09.366 [2024-11-20 11:23:54.950627] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:09.366 [2024-11-20 11:23:54.950680] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:09.626 2024/11/20 11:23:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:09.626 [2024-11-20 11:23:54.965699] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:09.626 [2024-11-20 11:23:54.965766] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:09.626 2024/11/20 11:23:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:09.626 [2024-11-20 11:23:54.983460] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:09.626 [2024-11-20 11:23:54.983521] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:09.626 2024/11/20 11:23:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:09.626 [2024-11-20 11:23:54.998702] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:09.626 [2024-11-20 11:23:54.998762] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:09.626 2024/11/20 11:23:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:09.626 [2024-11-20 11:23:55.008939] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:09.626 [2024-11-20 11:23:55.008999] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:09.627 2024/11/20 11:23:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:09.627 [2024-11-20 11:23:55.023408] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:09.627 [2024-11-20 11:23:55.023456] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:09.627 2024/11/20 11:23:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:09.627 [2024-11-20 11:23:55.040726] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:09.627 [2024-11-20 11:23:55.040764] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:09.627 2024/11/20 11:23:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:09.627 [2024-11-20 11:23:55.058068] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:09.627 [2024-11-20 11:23:55.058108] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:09.627 2024/11/20 11:23:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:09.627 [2024-11-20 11:23:55.073845] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:09.627 [2024-11-20 11:23:55.073886] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:09.627 2024/11/20 11:23:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:09.627 [2024-11-20 11:23:55.084416] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:09.627 [2024-11-20 11:23:55.084454] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:09.627 2024/11/20 11:23:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:09.627 [2024-11-20 11:23:55.099489] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:09.627 [2024-11-20 11:23:55.099673] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:09.627 2024/11/20 11:23:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:09.627 [2024-11-20 11:23:55.116471] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:09.627 [2024-11-20 11:23:55.116509] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:09.627 2024/11/20 11:23:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:09.627 [2024-11-20 11:23:55.126085] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:09.627 [2024-11-20 11:23:55.126122] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:09.627 2024/11/20 11:23:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:09.627 10949.20 IOPS, 85.54 MiB/s [2024-11-20T11:23:55.216Z] [2024-11-20 11:23:55.140729] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:09.627 [2024-11-20 11:23:55.140786] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:09.627 2024/11/20 11:23:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:09.627 00:11:09.627 Latency(us) 00:11:09.627 [2024-11-20T11:23:55.216Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:09.627 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:11:09.627 Nvme1n1 : 5.01 10951.72 85.56 0.00 0.00 11672.66 5153.51 25141.99 00:11:09.627 [2024-11-20T11:23:55.216Z] =================================================================================================================== 00:11:09.627 [2024-11-20T11:23:55.216Z] Total : 10951.72 85.56 0.00 0.00 11672.66 5153.51 25141.99 00:11:09.627 [2024-11-20 11:23:55.152725] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:09.627 [2024-11-20 11:23:55.152775] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:09.627 2024/11/20 11:23:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 
no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:09.627 [2024-11-20 11:23:55.164728] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:09.627 [2024-11-20 11:23:55.164779] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:09.627 2024/11/20 11:23:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:09.627 [2024-11-20 11:23:55.176766] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:09.627 [2024-11-20 11:23:55.176826] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:09.627 2024/11/20 11:23:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:09.627 [2024-11-20 11:23:55.188755] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:09.627 [2024-11-20 11:23:55.188814] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:09.627 2024/11/20 11:23:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:09.627 [2024-11-20 11:23:55.200756] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:09.627 [2024-11-20 11:23:55.200813] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:09.627 2024/11/20 11:23:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:09.887 [2024-11-20 11:23:55.212757] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:09.887 [2024-11-20 11:23:55.212806] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:09.887 2024/11/20 11:23:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:09.887 [2024-11-20 11:23:55.224766] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:09.887 [2024-11-20 11:23:55.224823] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:09.887 2024/11/20 11:23:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:09.887 [2024-11-20 11:23:55.236739] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
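The long run of Code=-32602 failures in this stretch does not abort the run: the zcopy test keeps re-issuing nvmf_subsystem_add_ns for NSID 1 while that namespace is still attached to nqn.2016-06.io.spdk:cnode1, so the target rejects every attempt with "Requested NSID 1 already in use" and the test carries on. A minimal sketch of reproducing one such rejection by hand with SPDK's rpc.py, assuming the default /var/tmp/spdk.sock RPC socket and the same bdev and subsystem names used in this run:

  # First attach succeeds: malloc0 becomes NSID 1 of cnode1.
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  # Re-adding the same NSID is refused with JSON-RPC error -32602 (Invalid parameters),
  # i.e. the "Requested NSID 1 already in use" line repeated throughout this log.
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1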
00:11:09.887 [2024-11-20 11:23:55.236781] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:09.887 2024/11/20 11:23:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:09.887 [2024-11-20 11:23:55.248754] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:09.887 [2024-11-20 11:23:55.248801] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:09.887 2024/11/20 11:23:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:09.887 [2024-11-20 11:23:55.260800] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:09.887 [2024-11-20 11:23:55.260860] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:09.887 2024/11/20 11:23:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:09.887 [2024-11-20 11:23:55.272761] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:09.887 [2024-11-20 11:23:55.272799] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:09.887 2024/11/20 11:23:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:09.887 [2024-11-20 11:23:55.284782] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:09.887 [2024-11-20 11:23:55.284827] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:09.887 2024/11/20 11:23:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:09.887 /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (68799) - No such process 00:11:09.887 11:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 68799 00:11:09.887 11:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:09.887 11:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.887 11:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:09.887 11:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.887 11:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:11:09.887 11:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 
00:11:09.887 11:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:09.887 delay0 00:11:09.887 11:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.887 11:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:11:09.887 11:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.887 11:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:09.887 11:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.887 11:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 ns:1' 00:11:10.146 [2024-11-20 11:23:55.492709] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:11:16.717 Initializing NVMe Controllers 00:11:16.717 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:11:16.717 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:11:16.717 Initialization complete. Launching workers. 00:11:16.717 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 101 00:11:16.717 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 388, failed to submit 33 00:11:16.717 success 213, unsuccessful 175, failed 0 00:11:16.717 11:24:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:11:16.717 11:24:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:11:16.717 11:24:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:16.717 11:24:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:11:16.717 11:24:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:16.717 11:24:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:11:16.717 11:24:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:16.717 11:24:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:16.717 rmmod nvme_tcp 00:11:16.717 rmmod nvme_fabrics 00:11:16.717 rmmod nvme_keyring 00:11:16.717 11:24:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:16.717 11:24:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:11:16.717 11:24:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:11:16.717 11:24:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 68631 ']' 00:11:16.717 11:24:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 68631 00:11:16.717 11:24:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 68631 ']' 00:11:16.717 11:24:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 68631 00:11:16.717 11:24:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:11:16.717 11:24:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' 
Linux = Linux ']' 00:11:16.717 11:24:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 68631 00:11:16.717 killing process with pid 68631 00:11:16.717 11:24:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:11:16.717 11:24:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:11:16.717 11:24:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 68631' 00:11:16.717 11:24:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 68631 00:11:16.717 11:24:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 68631 00:11:16.717 11:24:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:16.717 11:24:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:16.717 11:24:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:16.717 11:24:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:11:16.717 11:24:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:11:16.717 11:24:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:16.717 11:24:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:11:16.717 11:24:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:16.717 11:24:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:11:16.717 11:24:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:11:16.717 11:24:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:11:16.717 11:24:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:11:16.717 11:24:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:11:16.717 11:24:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:11:16.718 11:24:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:11:16.718 11:24:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:11:16.718 11:24:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:11:16.718 11:24:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:11:16.718 11:24:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:11:16.718 11:24:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:11:16.718 11:24:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:16.718 11:24:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:16.718 11:24:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@246 -- # remove_spdk_ns 00:11:16.718 11:24:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 
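For reference, the abort stage that runs just before this teardown swaps the malloc namespace for a delay bdev and drives it with SPDK's abort example. The same steps, lifted out of the xtrace above into a standalone sketch (rpc_cmd is replaced with a direct rpc.py call, which is an assumption about the environment; for bdev_delay_create, -r/-t/-w/-n are the average and 99th-percentile read/write latencies in microseconds, so 1000000 injects roughly one second of latency per I/O):

  # Replace NSID 1 of cnode1 with a delay bdev layered on malloc0, then run
  # mixed I/O plus abort requests against it over TCP, as zcopy.sh does above.
  scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
  scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
  ./build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 ns:1'

The injected latency is what keeps commands outstanding long enough to cancel, which is why the abort summary above can report both successful and unsuccessful aborts.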
00:11:16.718 11:24:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:16.718 11:24:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:16.718 11:24:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@300 -- # return 0 00:11:16.718 00:11:16.718 real 0m24.570s 00:11:16.718 user 0m39.353s 00:11:16.718 sys 0m6.440s 00:11:16.718 11:24:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:16.718 ************************************ 00:11:16.718 END TEST nvmf_zcopy 00:11:16.718 ************************************ 00:11:16.718 11:24:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:16.718 11:24:02 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:11:16.718 11:24:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:16.718 11:24:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:16.718 11:24:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:16.718 ************************************ 00:11:16.718 START TEST nvmf_nmic 00:11:16.718 ************************************ 00:11:16.718 11:24:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:11:16.718 * Looking for test storage... 00:11:16.718 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:16.718 11:24:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:16.718 11:24:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # lcov --version 00:11:16.718 11:24:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:16.718 11:24:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:16.718 11:24:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:16.718 11:24:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:16.718 11:24:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:16.718 11:24:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:11:16.718 11:24:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:11:16.718 11:24:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:11:16.718 11:24:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:11:16.718 11:24:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:11:16.718 11:24:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:11:16.718 11:24:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:11:16.718 11:24:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:16.718 11:24:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:11:16.718 11:24:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:11:16.718 11:24:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 
00:11:16.718 11:24:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:16.718 11:24:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:11:16.718 11:24:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:11:16.718 11:24:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:16.718 11:24:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:11:16.718 11:24:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:11:16.718 11:24:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:11:16.718 11:24:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:11:16.718 11:24:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:16.718 11:24:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:11:16.718 11:24:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:11:16.718 11:24:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:16.718 11:24:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:16.718 11:24:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:11:16.718 11:24:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:16.718 11:24:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:16.718 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:16.718 --rc genhtml_branch_coverage=1 00:11:16.718 --rc genhtml_function_coverage=1 00:11:16.718 --rc genhtml_legend=1 00:11:16.718 --rc geninfo_all_blocks=1 00:11:16.718 --rc geninfo_unexecuted_blocks=1 00:11:16.718 00:11:16.718 ' 00:11:16.718 11:24:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:16.718 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:16.718 --rc genhtml_branch_coverage=1 00:11:16.718 --rc genhtml_function_coverage=1 00:11:16.718 --rc genhtml_legend=1 00:11:16.718 --rc geninfo_all_blocks=1 00:11:16.718 --rc geninfo_unexecuted_blocks=1 00:11:16.718 00:11:16.718 ' 00:11:16.718 11:24:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:16.718 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:16.718 --rc genhtml_branch_coverage=1 00:11:16.718 --rc genhtml_function_coverage=1 00:11:16.718 --rc genhtml_legend=1 00:11:16.718 --rc geninfo_all_blocks=1 00:11:16.718 --rc geninfo_unexecuted_blocks=1 00:11:16.718 00:11:16.718 ' 00:11:16.718 11:24:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:16.718 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:16.718 --rc genhtml_branch_coverage=1 00:11:16.718 --rc genhtml_function_coverage=1 00:11:16.718 --rc genhtml_legend=1 00:11:16.718 --rc geninfo_all_blocks=1 00:11:16.718 --rc geninfo_unexecuted_blocks=1 00:11:16.718 00:11:16.718 ' 00:11:16.718 11:24:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:16.718 11:24:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:11:16.718 11:24:02 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:16.718 11:24:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:16.718 11:24:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:16.718 11:24:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:16.718 11:24:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:16.718 11:24:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:16.718 11:24:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:16.718 11:24:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:16.718 11:24:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:16.718 11:24:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:16.718 11:24:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a 00:11:16.718 11:24:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=806d442c-08ba-4719-b76d-33169283459a 00:11:16.718 11:24:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:16.718 11:24:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:16.718 11:24:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:16.718 11:24:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:16.718 11:24:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:16.718 11:24:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:11:16.977 11:24:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:16.977 11:24:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:16.977 11:24:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:16.977 11:24:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:16.977 11:24:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:16.978 11:24:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:16.978 11:24:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:11:16.978 11:24:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:16.978 11:24:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:11:16.978 11:24:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:16.978 11:24:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:16.978 11:24:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:16.978 11:24:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:16.978 11:24:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:16.978 11:24:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:16.978 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:16.978 11:24:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:16.978 11:24:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:16.978 11:24:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:16.978 11:24:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:16.978 11:24:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:16.978 11:24:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:11:16.978 11:24:02 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:16.978 11:24:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:16.978 11:24:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:16.978 11:24:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:16.978 11:24:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:16.978 11:24:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:16.978 11:24:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:16.978 11:24:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:16.978 11:24:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:11:16.978 11:24:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:11:16.978 11:24:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:11:16.978 11:24:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:11:16.978 11:24:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:11:16.978 11:24:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@460 -- # nvmf_veth_init 00:11:16.978 11:24:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:16.978 11:24:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:11:16.978 11:24:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:11:16.978 11:24:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:11:16.978 11:24:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:16.978 11:24:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:11:16.978 11:24:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:16.978 11:24:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:11:16.978 11:24:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:16.978 11:24:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:11:16.978 11:24:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:16.978 11:24:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:16.978 11:24:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:16.978 11:24:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:16.978 11:24:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:16.978 11:24:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:16.978 11:24:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:11:16.978 Cannot 
find device "nvmf_init_br" 00:11:16.978 11:24:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@162 -- # true 00:11:16.978 11:24:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:11:16.978 Cannot find device "nvmf_init_br2" 00:11:16.978 11:24:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@163 -- # true 00:11:16.978 11:24:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:11:16.978 Cannot find device "nvmf_tgt_br" 00:11:16.978 11:24:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@164 -- # true 00:11:16.978 11:24:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:11:16.978 Cannot find device "nvmf_tgt_br2" 00:11:16.978 11:24:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@165 -- # true 00:11:16.978 11:24:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:11:16.978 Cannot find device "nvmf_init_br" 00:11:16.978 11:24:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@166 -- # true 00:11:16.978 11:24:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:11:16.978 Cannot find device "nvmf_init_br2" 00:11:16.978 11:24:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@167 -- # true 00:11:16.978 11:24:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:11:16.978 Cannot find device "nvmf_tgt_br" 00:11:16.978 11:24:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@168 -- # true 00:11:16.978 11:24:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:11:16.978 Cannot find device "nvmf_tgt_br2" 00:11:16.978 11:24:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@169 -- # true 00:11:16.978 11:24:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:11:16.978 Cannot find device "nvmf_br" 00:11:16.978 11:24:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@170 -- # true 00:11:16.978 11:24:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:11:16.978 Cannot find device "nvmf_init_if" 00:11:16.978 11:24:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@171 -- # true 00:11:16.978 11:24:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:11:16.978 Cannot find device "nvmf_init_if2" 00:11:16.978 11:24:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@172 -- # true 00:11:16.978 11:24:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:16.978 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:16.978 11:24:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@173 -- # true 00:11:16.978 11:24:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:16.978 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:16.978 11:24:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@174 -- # true 00:11:16.978 11:24:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:11:16.978 11:24:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 
00:11:16.978 11:24:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:11:16.978 11:24:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:16.978 11:24:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:16.978 11:24:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:16.978 11:24:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:16.978 11:24:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:16.978 11:24:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:11:16.978 11:24:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:11:16.978 11:24:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:11:16.978 11:24:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:11:16.978 11:24:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:11:16.978 11:24:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:11:16.978 11:24:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:11:16.978 11:24:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:11:16.978 11:24:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:11:16.978 11:24:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:17.237 11:24:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:17.237 11:24:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:17.237 11:24:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:11:17.237 11:24:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:11:17.237 11:24:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:11:17.237 11:24:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:11:17.237 11:24:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:17.237 11:24:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:17.237 11:24:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:17.237 11:24:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:11:17.237 11:24:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@218 
-- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:11:17.237 11:24:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:11:17.237 11:24:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:17.237 11:24:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:11:17.237 11:24:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:11:17.237 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:17.237 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.096 ms 00:11:17.237 00:11:17.237 --- 10.0.0.3 ping statistics --- 00:11:17.237 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:17.237 rtt min/avg/max/mdev = 0.096/0.096/0.096/0.000 ms 00:11:17.237 11:24:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:11:17.237 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:11:17.237 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.044 ms 00:11:17.237 00:11:17.237 --- 10.0.0.4 ping statistics --- 00:11:17.237 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:17.237 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:11:17.237 11:24:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:17.237 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:17.237 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:11:17.237 00:11:17.237 --- 10.0.0.1 ping statistics --- 00:11:17.237 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:17.237 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:11:17.237 11:24:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:11:17.237 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:17.237 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.048 ms 00:11:17.237 00:11:17.237 --- 10.0.0.2 ping statistics --- 00:11:17.237 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:17.237 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:11:17.237 11:24:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:17.237 11:24:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@461 -- # return 0 00:11:17.237 11:24:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:17.237 11:24:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:17.237 11:24:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:17.237 11:24:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:17.237 11:24:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:17.237 11:24:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:17.237 11:24:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:17.237 11:24:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:11:17.237 11:24:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:17.237 11:24:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:17.237 11:24:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:17.237 11:24:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=69179 00:11:17.237 11:24:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:17.237 11:24:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 69179 00:11:17.237 11:24:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 69179 ']' 00:11:17.237 11:24:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:17.237 11:24:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:17.237 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:17.237 11:24:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:17.237 11:24:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:17.237 11:24:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:17.237 [2024-11-20 11:24:02.781578] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 
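The "Cannot find device" warnings and the ip commands above come from nvmf_veth_fini/nvmf_veth_init in test/nvmf/common.sh: the devices from the previous test are already gone, so a fresh veth-and-bridge topology is rebuilt and verified with the four pings before the nvmf target is launched inside the namespace (the "Starting SPDK v25.01-pre" line just above). A condensed sketch of that topology, using the same names and addresses as the log but omitting the second initiator/target pair (10.0.0.2/10.0.0.4) and the individual link-up steps:

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br       # initiator-side veth pair
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br        # target-side veth pair
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                  # target end lives in the namespace
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  # Rules are tagged with an SPDK_NVMF comment so nvmftestfini can later strip them
  # with: iptables-save | grep -v SPDK_NVMF | iptables-restore
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment SPDK_NVMF
  ping -c 1 10.0.0.3                                              # reachability check before the target starts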
00:11:17.237 [2024-11-20 11:24:02.781696] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:17.496 [2024-11-20 11:24:02.927074] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:17.496 [2024-11-20 11:24:02.973501] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:17.496 [2024-11-20 11:24:02.973569] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:17.496 [2024-11-20 11:24:02.973590] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:17.496 [2024-11-20 11:24:02.973606] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:17.496 [2024-11-20 11:24:02.973648] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:17.496 [2024-11-20 11:24:02.974532] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:17.496 [2024-11-20 11:24:02.974595] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:17.496 [2024-11-20 11:24:02.974660] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:17.496 [2024-11-20 11:24:02.974668] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:17.496 11:24:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:17.496 11:24:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:11:17.496 11:24:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:17.496 11:24:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:17.496 11:24:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:17.754 11:24:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:17.754 11:24:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:17.754 11:24:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.754 11:24:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:17.754 [2024-11-20 11:24:03.116651] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:17.754 11:24:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.754 11:24:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:17.754 11:24:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.754 11:24:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:17.754 Malloc0 00:11:17.754 11:24:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.754 11:24:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:17.754 11:24:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.754 11:24:03 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:17.754 11:24:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.754 11:24:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:17.754 11:24:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.754 11:24:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:17.754 11:24:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.754 11:24:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:11:17.755 11:24:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.755 11:24:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:17.755 [2024-11-20 11:24:03.171765] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:11:17.755 test case1: single bdev can't be used in multiple subsystems 00:11:17.755 11:24:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.755 11:24:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:11:17.755 11:24:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:11:17.755 11:24:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.755 11:24:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:17.755 11:24:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.755 11:24:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:11:17.755 11:24:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.755 11:24:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:17.755 11:24:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.755 11:24:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:11:17.755 11:24:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:11:17.755 11:24:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.755 11:24:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:17.755 [2024-11-20 11:24:03.195565] bdev.c:8193:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:11:17.755 [2024-11-20 11:24:03.195634] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:11:17.755 [2024-11-20 11:24:03.195653] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:17.755 2024/11/20 11:24:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:Malloc0 
no_auto_visible:%!s(bool=false)] nqn:nqn.2016-06.io.spdk:cnode2], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:17.755 request: 00:11:17.755 { 00:11:17.755 "method": "nvmf_subsystem_add_ns", 00:11:17.755 "params": { 00:11:17.755 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:11:17.755 "namespace": { 00:11:17.755 "bdev_name": "Malloc0", 00:11:17.755 "no_auto_visible": false 00:11:17.755 } 00:11:17.755 } 00:11:17.755 } 00:11:17.755 Got JSON-RPC error response 00:11:17.755 GoRPCClient: error on JSON-RPC call 00:11:17.755 11:24:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:11:17.755 Adding namespace failed - expected result. 00:11:17.755 test case2: host connect to nvmf target in multiple paths 00:11:17.755 11:24:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:11:17.755 11:24:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:11:17.755 11:24:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:11:17.755 11:24:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:11:17.755 11:24:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:11:17.755 11:24:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.755 11:24:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:17.755 [2024-11-20 11:24:03.211783] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:11:17.755 11:24:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.755 11:24:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a --hostid=806d442c-08ba-4719-b76d-33169283459a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:11:18.013 11:24:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a --hostid=806d442c-08ba-4719-b76d-33169283459a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4421 00:11:18.013 11:24:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:11:18.013 11:24:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:11:18.013 11:24:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:18.013 11:24:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:18.013 11:24:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:11:20.543 11:24:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:20.543 11:24:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:20.543 11:24:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:20.543 11:24:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:20.543 11:24:05 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:20.543 11:24:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:11:20.543 11:24:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:11:20.543 [global] 00:11:20.543 thread=1 00:11:20.543 invalidate=1 00:11:20.543 rw=write 00:11:20.543 time_based=1 00:11:20.543 runtime=1 00:11:20.543 ioengine=libaio 00:11:20.543 direct=1 00:11:20.543 bs=4096 00:11:20.543 iodepth=1 00:11:20.544 norandommap=0 00:11:20.544 numjobs=1 00:11:20.544 00:11:20.544 verify_dump=1 00:11:20.544 verify_backlog=512 00:11:20.544 verify_state_save=0 00:11:20.544 do_verify=1 00:11:20.544 verify=crc32c-intel 00:11:20.544 [job0] 00:11:20.544 filename=/dev/nvme0n1 00:11:20.544 Could not set queue depth (nvme0n1) 00:11:20.544 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:20.544 fio-3.35 00:11:20.544 Starting 1 thread 00:11:21.477 00:11:21.478 job0: (groupid=0, jobs=1): err= 0: pid=69275: Wed Nov 20 11:24:06 2024 00:11:21.478 read: IOPS=3094, BW=12.1MiB/s (12.7MB/s)(12.1MiB/1001msec) 00:11:21.478 slat (nsec): min=13029, max=63711, avg=16030.96, stdev=4427.97 00:11:21.478 clat (usec): min=129, max=271, avg=150.52, stdev=13.32 00:11:21.478 lat (usec): min=142, max=286, avg=166.55, stdev=14.45 00:11:21.478 clat percentiles (usec): 00:11:21.478 | 1.00th=[ 135], 5.00th=[ 137], 10.00th=[ 139], 20.00th=[ 141], 00:11:21.478 | 30.00th=[ 143], 40.00th=[ 145], 50.00th=[ 149], 60.00th=[ 151], 00:11:21.478 | 70.00th=[ 155], 80.00th=[ 157], 90.00th=[ 165], 95.00th=[ 172], 00:11:21.478 | 99.00th=[ 208], 99.50th=[ 221], 99.90th=[ 243], 99.95th=[ 258], 00:11:21.478 | 99.99th=[ 273] 00:11:21.478 write: IOPS=3580, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1001msec); 0 zone resets 00:11:21.478 slat (usec): min=19, max=133, avg=22.88, stdev= 6.07 00:11:21.478 clat (usec): min=92, max=665, avg=108.92, stdev=13.39 00:11:21.478 lat (usec): min=112, max=687, avg=131.80, stdev=15.65 00:11:21.478 clat percentiles (usec): 00:11:21.478 | 1.00th=[ 95], 5.00th=[ 97], 10.00th=[ 99], 20.00th=[ 102], 00:11:21.478 | 30.00th=[ 104], 40.00th=[ 105], 50.00th=[ 108], 60.00th=[ 110], 00:11:21.478 | 70.00th=[ 112], 80.00th=[ 116], 90.00th=[ 122], 95.00th=[ 127], 00:11:21.478 | 99.00th=[ 139], 99.50th=[ 143], 99.90th=[ 159], 99.95th=[ 297], 00:11:21.478 | 99.99th=[ 668] 00:11:21.478 bw ( KiB/s): min=14496, max=14496, per=100.00%, avg=14496.00, stdev= 0.00, samples=1 00:11:21.478 iops : min= 3624, max= 3624, avg=3624.00, stdev= 0.00, samples=1 00:11:21.478 lat (usec) : 100=7.62%, 250=92.32%, 500=0.04%, 750=0.01% 00:11:21.478 cpu : usr=2.70%, sys=10.00%, ctx=6692, majf=0, minf=5 00:11:21.478 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:21.478 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:21.478 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:21.478 issued rwts: total=3098,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:21.478 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:21.478 00:11:21.478 Run status group 0 (all jobs): 00:11:21.478 READ: bw=12.1MiB/s (12.7MB/s), 12.1MiB/s-12.1MiB/s (12.7MB/s-12.7MB/s), io=12.1MiB (12.7MB), run=1001-1001msec 00:11:21.478 WRITE: bw=14.0MiB/s (14.7MB/s), 14.0MiB/s-14.0MiB/s (14.7MB/s-14.7MB/s), io=14.0MiB (14.7MB), 
run=1001-1001msec 00:11:21.478 00:11:21.478 Disk stats (read/write): 00:11:21.478 nvme0n1: ios=2957/3072, merge=0/0, ticks=480/359, in_queue=839, util=91.28% 00:11:21.478 11:24:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:21.478 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:11:21.478 11:24:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:21.478 11:24:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:11:21.478 11:24:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:21.478 11:24:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:21.478 11:24:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:21.478 11:24:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:21.478 11:24:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:11:21.478 11:24:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:11:21.478 11:24:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:11:21.478 11:24:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:21.478 11:24:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:11:21.478 11:24:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:21.478 11:24:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:11:21.478 11:24:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:21.478 11:24:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:21.478 rmmod nvme_tcp 00:11:21.478 rmmod nvme_fabrics 00:11:21.478 rmmod nvme_keyring 00:11:21.478 11:24:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:21.478 11:24:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:11:21.478 11:24:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:11:21.478 11:24:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 69179 ']' 00:11:21.478 11:24:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 69179 00:11:21.478 11:24:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 69179 ']' 00:11:21.478 11:24:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 69179 00:11:21.478 11:24:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:11:21.478 11:24:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:21.478 11:24:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69179 00:11:21.736 11:24:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:21.736 11:24:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:21.736 11:24:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69179' 
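The nvmf_nmic run that finishes here reduces to a short rpc.py / nvme-cli sequence: claim one malloc bdev for cnode1, confirm that adding the same bdev to a second subsystem is rejected, then give the host a second listener and connect over both paths before the verified fio write. A condensed sketch of that flow, using the commands visible in the trace above (rpc.py stands for scripts/rpc.py; addresses and NQNs taken from the log, error handling omitted):

    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512 -b Malloc0
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    # test case1: Malloc0 is already claimed by cnode1, so this add_ns is expected to fail
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 && echo 'unexpected success'
    # test case2: a second listener gives the host another path to the same namespace
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4421
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1    # tears down both controllers, as shown below

The fio run above then writes through /dev/nvme0n1 with verify=crc32c-intel, so a data-path error on either listener would surface as a verification failure rather than a silent pass.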
00:11:21.736 killing process with pid 69179 00:11:21.736 11:24:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 69179 00:11:21.736 11:24:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 69179 00:11:21.736 11:24:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:21.736 11:24:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:21.736 11:24:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:21.736 11:24:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:11:21.736 11:24:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:11:21.736 11:24:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:21.736 11:24:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:11:21.736 11:24:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:21.736 11:24:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:11:21.736 11:24:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:11:21.736 11:24:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:11:21.736 11:24:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:11:21.736 11:24:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:11:21.736 11:24:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:11:21.736 11:24:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:11:21.736 11:24:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:11:21.736 11:24:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:11:21.736 11:24:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:11:21.994 11:24:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:11:21.994 11:24:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:11:21.994 11:24:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:21.994 11:24:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:21.994 11:24:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@246 -- # remove_spdk_ns 00:11:21.994 11:24:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:21.994 11:24:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:21.994 11:24:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:21.994 11:24:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@300 -- # return 0 00:11:21.994 00:11:21.994 real 0m5.374s 00:11:21.994 user 0m16.619s 00:11:21.994 sys 0m1.428s 00:11:21.994 11:24:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:21.994 
11:24:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:21.994 ************************************ 00:11:21.994 END TEST nvmf_nmic 00:11:21.994 ************************************ 00:11:21.994 11:24:07 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:11:21.994 11:24:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:21.994 11:24:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:21.994 11:24:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:21.994 ************************************ 00:11:21.994 START TEST nvmf_fio_target 00:11:21.994 ************************************ 00:11:21.994 11:24:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:11:22.253 * Looking for test storage... 00:11:22.253 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:22.253 11:24:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:22.253 11:24:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:22.253 11:24:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lcov --version 00:11:22.253 11:24:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:22.253 11:24:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:22.253 11:24:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:22.253 11:24:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:22.253 11:24:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:11:22.253 11:24:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:11:22.253 11:24:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:11:22.253 11:24:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:11:22.253 11:24:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:11:22.253 11:24:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:11:22.253 11:24:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:11:22.253 11:24:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:22.253 11:24:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:11:22.253 11:24:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:11:22.253 11:24:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:22.253 11:24:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:22.253 11:24:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:11:22.253 11:24:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:11:22.253 11:24:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:22.253 11:24:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:11:22.253 11:24:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:11:22.253 11:24:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:11:22.253 11:24:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:11:22.253 11:24:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:22.253 11:24:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:11:22.253 11:24:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:11:22.253 11:24:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:22.253 11:24:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:22.253 11:24:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:11:22.253 11:24:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:22.253 11:24:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:22.253 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:22.253 --rc genhtml_branch_coverage=1 00:11:22.253 --rc genhtml_function_coverage=1 00:11:22.253 --rc genhtml_legend=1 00:11:22.253 --rc geninfo_all_blocks=1 00:11:22.253 --rc geninfo_unexecuted_blocks=1 00:11:22.253 00:11:22.253 ' 00:11:22.253 11:24:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:22.253 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:22.253 --rc genhtml_branch_coverage=1 00:11:22.253 --rc genhtml_function_coverage=1 00:11:22.253 --rc genhtml_legend=1 00:11:22.253 --rc geninfo_all_blocks=1 00:11:22.253 --rc geninfo_unexecuted_blocks=1 00:11:22.253 00:11:22.253 ' 00:11:22.253 11:24:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:22.253 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:22.253 --rc genhtml_branch_coverage=1 00:11:22.253 --rc genhtml_function_coverage=1 00:11:22.253 --rc genhtml_legend=1 00:11:22.253 --rc geninfo_all_blocks=1 00:11:22.253 --rc geninfo_unexecuted_blocks=1 00:11:22.253 00:11:22.253 ' 00:11:22.253 11:24:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:22.253 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:22.253 --rc genhtml_branch_coverage=1 00:11:22.253 --rc genhtml_function_coverage=1 00:11:22.253 --rc genhtml_legend=1 00:11:22.253 --rc geninfo_all_blocks=1 00:11:22.253 --rc geninfo_unexecuted_blocks=1 00:11:22.253 00:11:22.253 ' 00:11:22.253 11:24:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:22.253 11:24:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:11:22.253 
11:24:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:22.253 11:24:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:22.253 11:24:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:22.254 11:24:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:22.254 11:24:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:22.254 11:24:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:22.254 11:24:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:22.254 11:24:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:22.254 11:24:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:22.254 11:24:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:22.254 11:24:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a 00:11:22.254 11:24:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=806d442c-08ba-4719-b76d-33169283459a 00:11:22.254 11:24:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:22.254 11:24:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:22.254 11:24:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:22.254 11:24:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:22.254 11:24:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:22.254 11:24:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:11:22.254 11:24:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:22.254 11:24:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:22.254 11:24:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:22.254 11:24:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:22.254 11:24:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:22.254 11:24:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:22.254 11:24:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:11:22.254 11:24:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:22.254 11:24:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:11:22.254 11:24:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:22.254 11:24:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:22.254 11:24:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:22.254 11:24:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:22.254 11:24:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:22.254 11:24:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:22.254 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:22.254 11:24:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:22.254 11:24:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:22.254 11:24:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:22.254 11:24:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:22.254 11:24:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:22.254 11:24:07 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:22.254 11:24:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:11:22.254 11:24:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:22.254 11:24:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:22.254 11:24:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:22.254 11:24:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:22.254 11:24:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:22.254 11:24:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:22.254 11:24:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:22.254 11:24:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:22.254 11:24:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:11:22.254 11:24:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:11:22.254 11:24:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:11:22.254 11:24:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:11:22.254 11:24:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:11:22.254 11:24:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@460 -- # nvmf_veth_init 00:11:22.254 11:24:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:22.254 11:24:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:11:22.254 11:24:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:11:22.254 11:24:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:11:22.254 11:24:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:22.254 11:24:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:11:22.254 11:24:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:22.254 11:24:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:11:22.254 11:24:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:22.254 11:24:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:11:22.254 11:24:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:22.254 11:24:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:22.254 11:24:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:22.254 11:24:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@158 -- # 
NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:22.254 11:24:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:22.254 11:24:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:22.254 11:24:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:11:22.254 Cannot find device "nvmf_init_br" 00:11:22.254 11:24:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@162 -- # true 00:11:22.254 11:24:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:11:22.254 Cannot find device "nvmf_init_br2" 00:11:22.254 11:24:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@163 -- # true 00:11:22.254 11:24:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:11:22.254 Cannot find device "nvmf_tgt_br" 00:11:22.254 11:24:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@164 -- # true 00:11:22.254 11:24:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:11:22.254 Cannot find device "nvmf_tgt_br2" 00:11:22.254 11:24:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@165 -- # true 00:11:22.254 11:24:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:11:22.254 Cannot find device "nvmf_init_br" 00:11:22.254 11:24:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@166 -- # true 00:11:22.254 11:24:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:11:22.254 Cannot find device "nvmf_init_br2" 00:11:22.254 11:24:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@167 -- # true 00:11:22.254 11:24:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:11:22.254 Cannot find device "nvmf_tgt_br" 00:11:22.254 11:24:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@168 -- # true 00:11:22.254 11:24:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:11:22.254 Cannot find device "nvmf_tgt_br2" 00:11:22.254 11:24:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@169 -- # true 00:11:22.254 11:24:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:11:22.513 Cannot find device "nvmf_br" 00:11:22.513 11:24:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@170 -- # true 00:11:22.513 11:24:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:11:22.513 Cannot find device "nvmf_init_if" 00:11:22.513 11:24:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@171 -- # true 00:11:22.513 11:24:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:11:22.513 Cannot find device "nvmf_init_if2" 00:11:22.513 11:24:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@172 -- # true 00:11:22.513 11:24:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:22.513 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:22.513 11:24:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@173 -- # true 00:11:22.513 
11:24:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:22.513 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:22.513 11:24:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@174 -- # true 00:11:22.513 11:24:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:11:22.513 11:24:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:22.513 11:24:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:11:22.513 11:24:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:22.513 11:24:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:22.513 11:24:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:22.513 11:24:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:22.513 11:24:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:22.513 11:24:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:11:22.513 11:24:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:11:22.513 11:24:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:11:22.513 11:24:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:11:22.513 11:24:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:11:22.513 11:24:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:11:22.513 11:24:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:11:22.513 11:24:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:11:22.513 11:24:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:11:22.513 11:24:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:22.513 11:24:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:22.513 11:24:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:22.513 11:24:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:11:22.513 11:24:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:11:22.513 11:24:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:11:22.513 11:24:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master 
nvmf_br 00:11:22.513 11:24:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:22.513 11:24:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:22.513 11:24:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:22.513 11:24:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:11:22.771 11:24:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:11:22.771 11:24:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:11:22.771 11:24:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:22.771 11:24:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:11:22.771 11:24:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:11:22.771 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:22.771 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.063 ms 00:11:22.771 00:11:22.771 --- 10.0.0.3 ping statistics --- 00:11:22.771 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:22.771 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:11:22.771 11:24:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:11:22.771 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:11:22.771 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.033 ms 00:11:22.771 00:11:22.771 --- 10.0.0.4 ping statistics --- 00:11:22.771 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:22.771 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:11:22.771 11:24:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:22.771 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:22.771 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.019 ms 00:11:22.771 00:11:22.771 --- 10.0.0.1 ping statistics --- 00:11:22.771 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:22.771 rtt min/avg/max/mdev = 0.019/0.019/0.019/0.000 ms 00:11:22.771 11:24:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:11:22.771 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:22.771 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.051 ms 00:11:22.771 00:11:22.771 --- 10.0.0.2 ping statistics --- 00:11:22.771 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:22.771 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:11:22.771 11:24:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:22.771 11:24:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@461 -- # return 0 00:11:22.771 11:24:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:22.771 11:24:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:22.771 11:24:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:22.771 11:24:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:22.772 11:24:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:22.772 11:24:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:22.772 11:24:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:22.772 11:24:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:11:22.772 11:24:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:22.772 11:24:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:22.772 11:24:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:22.772 11:24:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=69505 00:11:22.772 11:24:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:22.772 11:24:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 69505 00:11:22.772 11:24:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 69505 ']' 00:11:22.772 11:24:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:22.772 11:24:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:22.772 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:22.772 11:24:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:22.772 11:24:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:22.772 11:24:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:22.772 [2024-11-20 11:24:08.201378] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 
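The nvmf_veth_init block traced above builds a small virtual topology before the target starts: the initiator ends of the veth pairs stay in the root namespace, the target ends are moved into nvmf_tgt_ns_spdk, and the bridge ends are enslaved to nvmf_br, which is why 10.0.0.1/10.0.0.2 (host side) and 10.0.0.3/10.0.0.4 (target side) can all reach each other in the four pings. A minimal sketch of one initiator pair plus one target pair, following the commands in the trace (the *_if2 twins repeat the same pattern; names and the /24 addressing come from the log):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br    # host-side pair
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br     # target-side pair
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    for l in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$l" up; done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.3    # root namespace reaching the target-side address

With that in place, nvmf_tgt is launched inside the namespace (ip netns exec nvmf_tgt_ns_spdk .../nvmf_tgt -i 0 -e 0xFFFF -m 0xF, as in the startup lines here), so it can bind 10.0.0.3:4420 while the later nvme connect calls run from the root namespace.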
00:11:22.772 [2024-11-20 11:24:08.201453] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:22.772 [2024-11-20 11:24:08.346311] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:23.030 [2024-11-20 11:24:08.380978] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:23.030 [2024-11-20 11:24:08.381036] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:23.030 [2024-11-20 11:24:08.381048] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:23.030 [2024-11-20 11:24:08.381057] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:23.030 [2024-11-20 11:24:08.381064] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:23.030 [2024-11-20 11:24:08.381893] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:23.030 [2024-11-20 11:24:08.381969] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:23.030 [2024-11-20 11:24:08.382045] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:23.030 [2024-11-20 11:24:08.382048] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:23.030 11:24:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:23.030 11:24:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:11:23.030 11:24:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:23.030 11:24:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:23.030 11:24:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:23.030 11:24:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:23.030 11:24:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:11:23.288 [2024-11-20 11:24:08.870651] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:23.546 11:24:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:23.804 11:24:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:11:23.804 11:24:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:24.134 11:24:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:11:24.134 11:24:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:24.429 11:24:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:11:24.429 11:24:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:24.687 11:24:10 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:11:24.687 11:24:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:11:24.946 11:24:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:25.513 11:24:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:11:25.513 11:24:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:25.771 11:24:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:11:25.771 11:24:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:26.029 11:24:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:11:26.029 11:24:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:11:26.288 11:24:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:26.547 11:24:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:11:26.547 11:24:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:27.112 11:24:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:11:27.113 11:24:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:27.371 11:24:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:11:27.629 [2024-11-20 11:24:13.062390] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:11:27.629 11:24:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:11:27.888 11:24:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:11:28.454 11:24:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a --hostid=806d442c-08ba-4719-b76d-33169283459a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:11:28.454 11:24:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:11:28.454 11:24:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:11:28.454 11:24:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 
nvme_devices=0 00:11:28.454 11:24:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:11:28.454 11:24:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:11:28.454 11:24:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:11:30.357 11:24:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:30.357 11:24:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:30.357 11:24:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:30.357 11:24:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:11:30.357 11:24:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:30.357 11:24:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:11:30.357 11:24:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:11:30.615 [global] 00:11:30.615 thread=1 00:11:30.615 invalidate=1 00:11:30.615 rw=write 00:11:30.615 time_based=1 00:11:30.615 runtime=1 00:11:30.615 ioengine=libaio 00:11:30.615 direct=1 00:11:30.615 bs=4096 00:11:30.615 iodepth=1 00:11:30.615 norandommap=0 00:11:30.615 numjobs=1 00:11:30.615 00:11:30.615 verify_dump=1 00:11:30.615 verify_backlog=512 00:11:30.615 verify_state_save=0 00:11:30.615 do_verify=1 00:11:30.615 verify=crc32c-intel 00:11:30.615 [job0] 00:11:30.615 filename=/dev/nvme0n1 00:11:30.615 [job1] 00:11:30.615 filename=/dev/nvme0n2 00:11:30.615 [job2] 00:11:30.615 filename=/dev/nvme0n3 00:11:30.615 [job3] 00:11:30.615 filename=/dev/nvme0n4 00:11:30.615 Could not set queue depth (nvme0n1) 00:11:30.615 Could not set queue depth (nvme0n2) 00:11:30.615 Could not set queue depth (nvme0n3) 00:11:30.615 Could not set queue depth (nvme0n4) 00:11:30.615 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:30.615 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:30.615 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:30.615 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:30.615 fio-3.35 00:11:30.615 Starting 4 threads 00:11:32.015 00:11:32.015 job0: (groupid=0, jobs=1): err= 0: pid=69798: Wed Nov 20 11:24:17 2024 00:11:32.015 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:11:32.015 slat (nsec): min=13344, max=69659, avg=23421.88, stdev=5943.77 00:11:32.015 clat (usec): min=172, max=1224, avg=314.60, stdev=62.39 00:11:32.015 lat (usec): min=195, max=1272, avg=338.02, stdev=65.08 00:11:32.015 clat percentiles (usec): 00:11:32.015 | 1.00th=[ 227], 5.00th=[ 262], 10.00th=[ 269], 20.00th=[ 277], 00:11:32.015 | 30.00th=[ 281], 40.00th=[ 289], 50.00th=[ 293], 60.00th=[ 302], 00:11:32.015 | 70.00th=[ 326], 80.00th=[ 367], 90.00th=[ 383], 95.00th=[ 396], 00:11:32.015 | 99.00th=[ 482], 99.50th=[ 660], 99.90th=[ 963], 99.95th=[ 1221], 00:11:32.015 | 99.99th=[ 1221] 00:11:32.015 write: IOPS=1790, BW=7161KiB/s (7333kB/s)(7168KiB/1001msec); 0 zone resets 00:11:32.015 slat 
(nsec): min=19523, max=93486, avg=33842.46, stdev=8394.56 00:11:32.015 clat (usec): min=113, max=1416, avg=229.41, stdev=60.74 00:11:32.015 lat (usec): min=150, max=1464, avg=263.25, stdev=62.86 00:11:32.015 clat percentiles (usec): 00:11:32.015 | 1.00th=[ 145], 5.00th=[ 192], 10.00th=[ 196], 20.00th=[ 206], 00:11:32.015 | 30.00th=[ 212], 40.00th=[ 219], 50.00th=[ 225], 60.00th=[ 231], 00:11:32.015 | 70.00th=[ 237], 80.00th=[ 245], 90.00th=[ 258], 95.00th=[ 269], 00:11:32.015 | 99.00th=[ 359], 99.50th=[ 709], 99.90th=[ 1156], 99.95th=[ 1418], 00:11:32.015 | 99.99th=[ 1418] 00:11:32.015 bw ( KiB/s): min= 8192, max= 8192, per=26.68%, avg=8192.00, stdev= 0.00, samples=1 00:11:32.015 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:11:32.015 lat (usec) : 250=46.78%, 500=52.46%, 750=0.36%, 1000=0.30% 00:11:32.015 lat (msec) : 2=0.09% 00:11:32.015 cpu : usr=1.70%, sys=7.70%, ctx=3328, majf=0, minf=11 00:11:32.015 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:32.015 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:32.015 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:32.015 issued rwts: total=1536,1792,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:32.015 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:32.015 job1: (groupid=0, jobs=1): err= 0: pid=69799: Wed Nov 20 11:24:17 2024 00:11:32.015 read: IOPS=1807, BW=7229KiB/s (7402kB/s)(7236KiB/1001msec) 00:11:32.015 slat (nsec): min=11149, max=68175, avg=19257.71, stdev=6788.06 00:11:32.015 clat (usec): min=141, max=6756, avg=279.41, stdev=259.31 00:11:32.015 lat (usec): min=156, max=6774, avg=298.67, stdev=260.01 00:11:32.015 clat percentiles (usec): 00:11:32.015 | 1.00th=[ 151], 5.00th=[ 157], 10.00th=[ 161], 20.00th=[ 172], 00:11:32.015 | 30.00th=[ 249], 40.00th=[ 262], 50.00th=[ 269], 60.00th=[ 277], 00:11:32.015 | 70.00th=[ 285], 80.00th=[ 326], 90.00th=[ 375], 95.00th=[ 392], 00:11:32.015 | 99.00th=[ 420], 99.50th=[ 742], 99.90th=[ 3818], 99.95th=[ 6783], 00:11:32.015 | 99.99th=[ 6783] 00:11:32.015 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:11:32.015 slat (usec): min=12, max=136, avg=28.99, stdev=10.34 00:11:32.015 clat (usec): min=103, max=543, avg=191.38, stdev=55.97 00:11:32.015 lat (usec): min=126, max=575, avg=220.37, stdev=59.38 00:11:32.015 clat percentiles (usec): 00:11:32.015 | 1.00th=[ 114], 5.00th=[ 120], 10.00th=[ 125], 20.00th=[ 135], 00:11:32.015 | 30.00th=[ 169], 40.00th=[ 190], 50.00th=[ 196], 60.00th=[ 202], 00:11:32.015 | 70.00th=[ 210], 80.00th=[ 221], 90.00th=[ 247], 95.00th=[ 269], 00:11:32.015 | 99.00th=[ 457], 99.50th=[ 490], 99.90th=[ 506], 99.95th=[ 537], 00:11:32.015 | 99.99th=[ 545] 00:11:32.015 bw ( KiB/s): min= 8192, max= 8192, per=26.68%, avg=8192.00, stdev= 0.00, samples=1 00:11:32.015 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:11:32.015 lat (usec) : 250=62.56%, 500=37.05%, 750=0.16% 00:11:32.015 lat (msec) : 2=0.03%, 4=0.18%, 10=0.03% 00:11:32.015 cpu : usr=1.40%, sys=8.20%, ctx=3870, majf=0, minf=9 00:11:32.015 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:32.015 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:32.015 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:32.015 issued rwts: total=1809,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:32.015 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:32.015 job2: (groupid=0, 
jobs=1): err= 0: pid=69800: Wed Nov 20 11:24:17 2024 00:11:32.015 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:11:32.015 slat (nsec): min=15341, max=78594, avg=25725.96, stdev=6628.74 00:11:32.015 clat (usec): min=174, max=1043, avg=312.55, stdev=69.79 00:11:32.015 lat (usec): min=194, max=1086, avg=338.28, stdev=69.97 00:11:32.015 clat percentiles (usec): 00:11:32.015 | 1.00th=[ 235], 5.00th=[ 249], 10.00th=[ 258], 20.00th=[ 269], 00:11:32.015 | 30.00th=[ 277], 40.00th=[ 285], 50.00th=[ 293], 60.00th=[ 302], 00:11:32.015 | 70.00th=[ 322], 80.00th=[ 371], 90.00th=[ 388], 95.00th=[ 400], 00:11:32.015 | 99.00th=[ 660], 99.50th=[ 717], 99.90th=[ 898], 99.95th=[ 1045], 00:11:32.015 | 99.99th=[ 1045] 00:11:32.015 write: IOPS=1793, BW=7173KiB/s (7345kB/s)(7180KiB/1001msec); 0 zone resets 00:11:32.015 slat (nsec): min=19294, max=97211, avg=31505.68, stdev=5934.90 00:11:32.016 clat (usec): min=126, max=2846, avg=231.35, stdev=76.22 00:11:32.016 lat (usec): min=154, max=2893, avg=262.86, stdev=77.28 00:11:32.016 clat percentiles (usec): 00:11:32.016 | 1.00th=[ 153], 5.00th=[ 190], 10.00th=[ 196], 20.00th=[ 204], 00:11:32.016 | 30.00th=[ 212], 40.00th=[ 219], 50.00th=[ 227], 60.00th=[ 235], 00:11:32.016 | 70.00th=[ 243], 80.00th=[ 251], 90.00th=[ 265], 95.00th=[ 277], 00:11:32.016 | 99.00th=[ 334], 99.50th=[ 449], 99.90th=[ 1123], 99.95th=[ 2835], 00:11:32.016 | 99.99th=[ 2835] 00:11:32.016 bw ( KiB/s): min= 8192, max= 8192, per=26.68%, avg=8192.00, stdev= 0.00, samples=1 00:11:32.016 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:11:32.016 lat (usec) : 250=45.18%, 500=53.92%, 750=0.63%, 1000=0.18% 00:11:32.016 lat (msec) : 2=0.06%, 4=0.03% 00:11:32.016 cpu : usr=1.90%, sys=7.30%, ctx=3343, majf=0, minf=9 00:11:32.016 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:32.016 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:32.016 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:32.016 issued rwts: total=1536,1795,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:32.016 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:32.016 job3: (groupid=0, jobs=1): err= 0: pid=69801: Wed Nov 20 11:24:17 2024 00:11:32.016 read: IOPS=1764, BW=7057KiB/s (7226kB/s)(7064KiB/1001msec) 00:11:32.016 slat (nsec): min=10865, max=48697, avg=22136.84, stdev=6860.88 00:11:32.016 clat (usec): min=160, max=669, avg=273.30, stdev=73.67 00:11:32.016 lat (usec): min=184, max=707, avg=295.44, stdev=72.85 00:11:32.016 clat percentiles (usec): 00:11:32.016 | 1.00th=[ 167], 5.00th=[ 174], 10.00th=[ 178], 20.00th=[ 190], 00:11:32.016 | 30.00th=[ 251], 40.00th=[ 265], 50.00th=[ 273], 60.00th=[ 277], 00:11:32.016 | 70.00th=[ 289], 80.00th=[ 334], 90.00th=[ 379], 95.00th=[ 396], 00:11:32.016 | 99.00th=[ 469], 99.50th=[ 502], 99.90th=[ 660], 99.95th=[ 668], 00:11:32.016 | 99.99th=[ 668] 00:11:32.016 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:11:32.016 slat (usec): min=12, max=158, avg=33.07, stdev=10.50 00:11:32.016 clat (usec): min=120, max=733, avg=195.75, stdev=42.23 00:11:32.016 lat (usec): min=143, max=776, avg=228.82, stdev=40.90 00:11:32.016 clat percentiles (usec): 00:11:32.016 | 1.00th=[ 127], 5.00th=[ 135], 10.00th=[ 143], 20.00th=[ 153], 00:11:32.016 | 30.00th=[ 180], 40.00th=[ 190], 50.00th=[ 196], 60.00th=[ 204], 00:11:32.016 | 70.00th=[ 212], 80.00th=[ 223], 90.00th=[ 251], 95.00th=[ 265], 00:11:32.016 | 99.00th=[ 289], 99.50th=[ 306], 99.90th=[ 494], 99.95th=[ 
510], 00:11:32.016 | 99.99th=[ 734] 00:11:32.016 bw ( KiB/s): min= 8192, max= 8192, per=26.68%, avg=8192.00, stdev= 0.00, samples=1 00:11:32.016 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:11:32.016 lat (usec) : 250=62.01%, 500=37.70%, 750=0.29% 00:11:32.016 cpu : usr=2.80%, sys=7.70%, ctx=3816, majf=0, minf=9 00:11:32.016 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:32.016 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:32.016 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:32.016 issued rwts: total=1766,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:32.016 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:32.016 00:11:32.016 Run status group 0 (all jobs): 00:11:32.016 READ: bw=25.9MiB/s (27.2MB/s), 6138KiB/s-7229KiB/s (6285kB/s-7402kB/s), io=26.0MiB (27.2MB), run=1001-1001msec 00:11:32.016 WRITE: bw=30.0MiB/s (31.4MB/s), 7161KiB/s-8184KiB/s (7333kB/s-8380kB/s), io=30.0MiB (31.5MB), run=1001-1001msec 00:11:32.016 00:11:32.016 Disk stats (read/write): 00:11:32.016 nvme0n1: ios=1291/1536, merge=0/0, ticks=436/384, in_queue=820, util=86.47% 00:11:32.016 nvme0n2: ios=1551/1736, merge=0/0, ticks=433/346, in_queue=779, util=85.73% 00:11:32.016 nvme0n3: ios=1240/1536, merge=0/0, ticks=410/376, in_queue=786, util=88.56% 00:11:32.016 nvme0n4: ios=1536/1679, merge=0/0, ticks=411/343, in_queue=754, util=89.62% 00:11:32.016 11:24:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:11:32.016 [global] 00:11:32.016 thread=1 00:11:32.016 invalidate=1 00:11:32.016 rw=randwrite 00:11:32.016 time_based=1 00:11:32.016 runtime=1 00:11:32.016 ioengine=libaio 00:11:32.016 direct=1 00:11:32.016 bs=4096 00:11:32.016 iodepth=1 00:11:32.016 norandommap=0 00:11:32.016 numjobs=1 00:11:32.016 00:11:32.016 verify_dump=1 00:11:32.016 verify_backlog=512 00:11:32.016 verify_state_save=0 00:11:32.016 do_verify=1 00:11:32.016 verify=crc32c-intel 00:11:32.016 [job0] 00:11:32.016 filename=/dev/nvme0n1 00:11:32.016 [job1] 00:11:32.016 filename=/dev/nvme0n2 00:11:32.016 [job2] 00:11:32.016 filename=/dev/nvme0n3 00:11:32.016 [job3] 00:11:32.016 filename=/dev/nvme0n4 00:11:32.016 Could not set queue depth (nvme0n1) 00:11:32.016 Could not set queue depth (nvme0n2) 00:11:32.016 Could not set queue depth (nvme0n3) 00:11:32.016 Could not set queue depth (nvme0n4) 00:11:32.016 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:32.016 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:32.016 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:32.016 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:32.016 fio-3.35 00:11:32.016 Starting 4 threads 00:11:33.387 00:11:33.387 job0: (groupid=0, jobs=1): err= 0: pid=69861: Wed Nov 20 11:24:18 2024 00:11:33.387 read: IOPS=706, BW=2825KiB/s (2893kB/s)(2828KiB/1001msec) 00:11:33.387 slat (nsec): min=17097, max=94033, avg=33904.98, stdev=11767.07 00:11:33.387 clat (usec): min=227, max=2258, avg=629.23, stdev=142.08 00:11:33.387 lat (usec): min=256, max=2292, avg=663.13, stdev=143.14 00:11:33.387 clat percentiles (usec): 00:11:33.387 | 1.00th=[ 355], 5.00th=[ 416], 10.00th=[ 433], 20.00th=[ 570], 00:11:33.387 
| 30.00th=[ 627], 40.00th=[ 644], 50.00th=[ 660], 60.00th=[ 668], 00:11:33.387 | 70.00th=[ 676], 80.00th=[ 693], 90.00th=[ 709], 95.00th=[ 717], 00:11:33.387 | 99.00th=[ 930], 99.50th=[ 1319], 99.90th=[ 2245], 99.95th=[ 2245], 00:11:33.387 | 99.99th=[ 2245] 00:11:33.387 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:11:33.387 slat (usec): min=25, max=118, avg=44.32, stdev=12.64 00:11:33.387 clat (usec): min=166, max=3276, avg=467.66, stdev=148.89 00:11:33.387 lat (usec): min=197, max=3347, avg=511.98, stdev=150.80 00:11:33.387 clat percentiles (usec): 00:11:33.387 | 1.00th=[ 208], 5.00th=[ 310], 10.00th=[ 326], 20.00th=[ 343], 00:11:33.387 | 30.00th=[ 371], 40.00th=[ 461], 50.00th=[ 482], 60.00th=[ 510], 00:11:33.387 | 70.00th=[ 537], 80.00th=[ 553], 90.00th=[ 570], 95.00th=[ 594], 00:11:33.387 | 99.00th=[ 914], 99.50th=[ 1004], 99.90th=[ 1975], 99.95th=[ 3261], 00:11:33.387 | 99.99th=[ 3261] 00:11:33.387 bw ( KiB/s): min= 4096, max= 4096, per=17.19%, avg=4096.00, stdev= 0.00, samples=1 00:11:33.387 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:11:33.387 lat (usec) : 250=0.92%, 500=40.50%, 750=57.31%, 1000=0.46% 00:11:33.387 lat (msec) : 2=0.69%, 4=0.12% 00:11:33.387 cpu : usr=1.70%, sys=5.10%, ctx=1733, majf=0, minf=13 00:11:33.387 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:33.387 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:33.387 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:33.387 issued rwts: total=707,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:33.387 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:33.387 job1: (groupid=0, jobs=1): err= 0: pid=69865: Wed Nov 20 11:24:18 2024 00:11:33.387 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:11:33.387 slat (nsec): min=10977, max=97376, avg=15274.91, stdev=4950.55 00:11:33.387 clat (usec): min=184, max=1915, avg=296.69, stdev=49.04 00:11:33.387 lat (usec): min=197, max=1929, avg=311.96, stdev=49.17 00:11:33.387 clat percentiles (usec): 00:11:33.387 | 1.00th=[ 251], 5.00th=[ 269], 10.00th=[ 273], 20.00th=[ 281], 00:11:33.387 | 30.00th=[ 285], 40.00th=[ 285], 50.00th=[ 289], 60.00th=[ 297], 00:11:33.387 | 70.00th=[ 302], 80.00th=[ 310], 90.00th=[ 326], 95.00th=[ 343], 00:11:33.387 | 99.00th=[ 396], 99.50th=[ 429], 99.90th=[ 545], 99.95th=[ 1909], 00:11:33.387 | 99.99th=[ 1909] 00:11:33.387 write: IOPS=1956, BW=7824KiB/s (8012kB/s)(7832KiB/1001msec); 0 zone resets 00:11:33.388 slat (nsec): min=10954, max=59281, avg=25207.79, stdev=6124.63 00:11:33.388 clat (usec): min=118, max=1176, avg=237.70, stdev=68.64 00:11:33.388 lat (usec): min=141, max=1225, avg=262.90, stdev=71.07 00:11:33.388 clat percentiles (usec): 00:11:33.388 | 1.00th=[ 190], 5.00th=[ 200], 10.00th=[ 204], 20.00th=[ 210], 00:11:33.388 | 30.00th=[ 212], 40.00th=[ 217], 50.00th=[ 221], 60.00th=[ 225], 00:11:33.388 | 70.00th=[ 229], 80.00th=[ 237], 90.00th=[ 260], 95.00th=[ 449], 00:11:33.388 | 99.00th=[ 515], 99.50th=[ 529], 99.90th=[ 807], 99.95th=[ 1172], 00:11:33.388 | 99.99th=[ 1172] 00:11:33.388 bw ( KiB/s): min= 8192, max= 8192, per=34.37%, avg=8192.00, stdev= 0.00, samples=1 00:11:33.388 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:11:33.388 lat (usec) : 250=49.46%, 500=49.63%, 750=0.83%, 1000=0.03% 00:11:33.388 lat (msec) : 2=0.06% 00:11:33.388 cpu : usr=1.50%, sys=5.60%, ctx=3498, majf=0, minf=11 00:11:33.388 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 
32=0.0%, >=64=0.0% 00:11:33.388 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:33.388 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:33.388 issued rwts: total=1536,1958,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:33.388 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:33.388 job2: (groupid=0, jobs=1): err= 0: pid=69866: Wed Nov 20 11:24:18 2024 00:11:33.388 read: IOPS=651, BW=2605KiB/s (2668kB/s)(2608KiB/1001msec) 00:11:33.388 slat (usec): min=19, max=104, avg=38.65, stdev=13.74 00:11:33.388 clat (usec): min=272, max=2359, avg=661.69, stdev=157.51 00:11:33.388 lat (usec): min=295, max=2383, avg=700.34, stdev=160.68 00:11:33.388 clat percentiles (usec): 00:11:33.388 | 1.00th=[ 412], 5.00th=[ 437], 10.00th=[ 453], 20.00th=[ 611], 00:11:33.388 | 30.00th=[ 635], 40.00th=[ 652], 50.00th=[ 668], 60.00th=[ 676], 00:11:33.388 | 70.00th=[ 685], 80.00th=[ 701], 90.00th=[ 742], 95.00th=[ 922], 00:11:33.388 | 99.00th=[ 1106], 99.50th=[ 1385], 99.90th=[ 2376], 99.95th=[ 2376], 00:11:33.388 | 99.99th=[ 2376] 00:11:33.388 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:11:33.388 slat (usec): min=25, max=118, avg=46.99, stdev=12.04 00:11:33.388 clat (usec): min=212, max=1701, avg=476.47, stdev=122.60 00:11:33.388 lat (usec): min=270, max=1743, avg=523.46, stdev=120.68 00:11:33.388 clat percentiles (usec): 00:11:33.388 | 1.00th=[ 251], 5.00th=[ 285], 10.00th=[ 310], 20.00th=[ 363], 00:11:33.388 | 30.00th=[ 433], 40.00th=[ 461], 50.00th=[ 482], 60.00th=[ 523], 00:11:33.388 | 70.00th=[ 545], 80.00th=[ 562], 90.00th=[ 586], 95.00th=[ 627], 00:11:33.388 | 99.00th=[ 725], 99.50th=[ 1106], 99.90th=[ 1205], 99.95th=[ 1696], 00:11:33.388 | 99.99th=[ 1696] 00:11:33.388 bw ( KiB/s): min= 4096, max= 4096, per=17.19%, avg=4096.00, stdev= 0.00, samples=1 00:11:33.388 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:11:33.388 lat (usec) : 250=0.60%, 500=37.47%, 750=57.88%, 1000=3.16% 00:11:33.388 lat (msec) : 2=0.78%, 4=0.12% 00:11:33.388 cpu : usr=1.80%, sys=5.40%, ctx=1677, majf=0, minf=10 00:11:33.388 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:33.388 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:33.388 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:33.388 issued rwts: total=652,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:33.388 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:33.388 job3: (groupid=0, jobs=1): err= 0: pid=69867: Wed Nov 20 11:24:18 2024 00:11:33.388 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:11:33.388 slat (usec): min=11, max=179, avg=15.12, stdev= 6.05 00:11:33.388 clat (usec): min=55, max=1778, avg=296.66, stdev=45.93 00:11:33.388 lat (usec): min=197, max=1790, avg=311.78, stdev=45.75 00:11:33.388 clat percentiles (usec): 00:11:33.388 | 1.00th=[ 258], 5.00th=[ 273], 10.00th=[ 277], 20.00th=[ 281], 00:11:33.388 | 30.00th=[ 281], 40.00th=[ 285], 50.00th=[ 289], 60.00th=[ 297], 00:11:33.388 | 70.00th=[ 302], 80.00th=[ 310], 90.00th=[ 326], 95.00th=[ 343], 00:11:33.388 | 99.00th=[ 392], 99.50th=[ 412], 99.90th=[ 478], 99.95th=[ 1778], 00:11:33.388 | 99.99th=[ 1778] 00:11:33.388 write: IOPS=1956, BW=7824KiB/s (8012kB/s)(7832KiB/1001msec); 0 zone resets 00:11:33.388 slat (usec): min=14, max=108, avg=25.36, stdev= 6.24 00:11:33.388 clat (usec): min=130, max=991, avg=237.46, stdev=66.99 00:11:33.388 lat (usec): min=162, max=1041, avg=262.82, 
stdev=69.33 00:11:33.388 clat percentiles (usec): 00:11:33.388 | 1.00th=[ 192], 5.00th=[ 202], 10.00th=[ 206], 20.00th=[ 210], 00:11:33.388 | 30.00th=[ 212], 40.00th=[ 217], 50.00th=[ 221], 60.00th=[ 225], 00:11:33.388 | 70.00th=[ 229], 80.00th=[ 237], 90.00th=[ 258], 95.00th=[ 457], 00:11:33.388 | 99.00th=[ 506], 99.50th=[ 510], 99.90th=[ 742], 99.95th=[ 996], 00:11:33.388 | 99.99th=[ 996] 00:11:33.388 bw ( KiB/s): min= 8192, max= 8192, per=34.37%, avg=8192.00, stdev= 0.00, samples=1 00:11:33.388 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:11:33.388 lat (usec) : 100=0.03%, 250=49.28%, 500=49.89%, 750=0.74%, 1000=0.03% 00:11:33.388 lat (msec) : 2=0.03% 00:11:33.388 cpu : usr=1.40%, sys=5.90%, ctx=3495, majf=0, minf=15 00:11:33.388 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:33.388 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:33.388 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:33.388 issued rwts: total=1536,1958,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:33.388 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:33.388 00:11:33.388 Run status group 0 (all jobs): 00:11:33.388 READ: bw=17.3MiB/s (18.1MB/s), 2605KiB/s-6138KiB/s (2668kB/s-6285kB/s), io=17.3MiB (18.1MB), run=1001-1001msec 00:11:33.388 WRITE: bw=23.3MiB/s (24.4MB/s), 4092KiB/s-7824KiB/s (4190kB/s-8012kB/s), io=23.3MiB (24.4MB), run=1001-1001msec 00:11:33.388 00:11:33.388 Disk stats (read/write): 00:11:33.388 nvme0n1: ios=562/1008, merge=0/0, ticks=361/486, in_queue=847, util=88.18% 00:11:33.388 nvme0n2: ios=1532/1536, merge=0/0, ticks=468/357, in_queue=825, util=87.53% 00:11:33.388 nvme0n3: ios=512/954, merge=0/0, ticks=332/453, in_queue=785, util=89.09% 00:11:33.388 nvme0n4: ios=1504/1536, merge=0/0, ticks=441/365, in_queue=806, util=89.74% 00:11:33.388 11:24:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:11:33.388 [global] 00:11:33.388 thread=1 00:11:33.388 invalidate=1 00:11:33.388 rw=write 00:11:33.388 time_based=1 00:11:33.388 runtime=1 00:11:33.388 ioengine=libaio 00:11:33.388 direct=1 00:11:33.388 bs=4096 00:11:33.388 iodepth=128 00:11:33.388 norandommap=0 00:11:33.388 numjobs=1 00:11:33.388 00:11:33.388 verify_dump=1 00:11:33.388 verify_backlog=512 00:11:33.388 verify_state_save=0 00:11:33.388 do_verify=1 00:11:33.388 verify=crc32c-intel 00:11:33.388 [job0] 00:11:33.388 filename=/dev/nvme0n1 00:11:33.388 [job1] 00:11:33.388 filename=/dev/nvme0n2 00:11:33.388 [job2] 00:11:33.388 filename=/dev/nvme0n3 00:11:33.388 [job3] 00:11:33.388 filename=/dev/nvme0n4 00:11:33.388 Could not set queue depth (nvme0n1) 00:11:33.388 Could not set queue depth (nvme0n2) 00:11:33.388 Could not set queue depth (nvme0n3) 00:11:33.388 Could not set queue depth (nvme0n4) 00:11:33.388 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:33.388 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:33.388 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:33.388 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:33.388 fio-3.35 00:11:33.388 Starting 4 threads 00:11:34.761 00:11:34.761 job0: (groupid=0, jobs=1): err= 0: pid=69923: Wed Nov 20 11:24:20 2024 00:11:34.761 
read: IOPS=1005, BW=4024KiB/s (4120kB/s)(4096KiB/1018msec) 00:11:34.761 slat (usec): min=6, max=15661, avg=261.45, stdev=1376.34 00:11:34.761 clat (usec): min=21029, max=55226, avg=31632.26, stdev=6533.72 00:11:34.761 lat (usec): min=21045, max=55242, avg=31893.71, stdev=6668.27 00:11:34.761 clat percentiles (usec): 00:11:34.761 | 1.00th=[23987], 5.00th=[24249], 10.00th=[25560], 20.00th=[26608], 00:11:34.761 | 30.00th=[28181], 40.00th=[28967], 50.00th=[29230], 60.00th=[29754], 00:11:34.761 | 70.00th=[33817], 80.00th=[34866], 90.00th=[43779], 95.00th=[44303], 00:11:34.761 | 99.00th=[51643], 99.50th=[55313], 99.90th=[55313], 99.95th=[55313], 00:11:34.761 | 99.99th=[55313] 00:11:34.761 write: IOPS=1468, BW=5874KiB/s (6015kB/s)(5980KiB/1018msec); 0 zone resets 00:11:34.761 slat (usec): min=11, max=16749, avg=488.59, stdev=1897.36 00:11:34.761 clat (msec): min=8, max=126, avg=63.86, stdev=32.49 00:11:34.761 lat (msec): min=18, max=126, avg=64.35, stdev=32.68 00:11:34.761 clat percentiles (msec): 00:11:34.761 | 1.00th=[ 22], 5.00th=[ 28], 10.00th=[ 28], 20.00th=[ 29], 00:11:34.761 | 30.00th=[ 33], 40.00th=[ 51], 50.00th=[ 61], 60.00th=[ 68], 00:11:34.761 | 70.00th=[ 84], 80.00th=[ 99], 90.00th=[ 114], 95.00th=[ 122], 00:11:34.761 | 99.00th=[ 123], 99.50th=[ 125], 99.90th=[ 127], 99.95th=[ 127], 00:11:34.761 | 99.99th=[ 127] 00:11:34.761 bw ( KiB/s): min= 3712, max= 7238, per=14.16%, avg=5475.00, stdev=2493.26, samples=2 00:11:34.761 iops : min= 928, max= 1809, avg=1368.50, stdev=622.96, samples=2 00:11:34.761 lat (msec) : 10=0.04%, 20=0.32%, 50=62.60%, 100=25.25%, 250=11.79% 00:11:34.761 cpu : usr=0.98%, sys=3.83%, ctx=161, majf=0, minf=3 00:11:34.761 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.6%, 32=1.3%, >=64=97.5% 00:11:34.761 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:34.761 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:34.761 issued rwts: total=1024,1495,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:34.761 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:34.761 job1: (groupid=0, jobs=1): err= 0: pid=69924: Wed Nov 20 11:24:20 2024 00:11:34.761 read: IOPS=4553, BW=17.8MiB/s (18.7MB/s)(18.0MiB/1012msec) 00:11:34.761 slat (usec): min=3, max=21405, avg=126.68, stdev=999.12 00:11:34.761 clat (usec): min=4036, max=71272, avg=16438.65, stdev=15430.29 00:11:34.761 lat (usec): min=4051, max=71289, avg=16565.33, stdev=15531.23 00:11:34.761 clat percentiles (usec): 00:11:34.761 | 1.00th=[ 7635], 5.00th=[ 8029], 10.00th=[ 8717], 20.00th=[ 9372], 00:11:34.761 | 30.00th=[ 9896], 40.00th=[10159], 50.00th=[10683], 60.00th=[11338], 00:11:34.761 | 70.00th=[12387], 80.00th=[14615], 90.00th=[50070], 95.00th=[60556], 00:11:34.761 | 99.00th=[65799], 99.50th=[68682], 99.90th=[70779], 99.95th=[70779], 00:11:34.761 | 99.99th=[70779] 00:11:34.761 write: IOPS=4701, BW=18.4MiB/s (19.3MB/s)(18.6MiB/1012msec); 0 zone resets 00:11:34.761 slat (usec): min=4, max=10283, avg=81.44, stdev=495.03 00:11:34.761 clat (usec): min=2948, max=56804, avg=11036.64, stdev=5820.43 00:11:34.761 lat (usec): min=3580, max=56816, avg=11118.08, stdev=5860.85 00:11:34.761 clat percentiles (usec): 00:11:34.761 | 1.00th=[ 3982], 5.00th=[ 5735], 10.00th=[ 7570], 20.00th=[ 9110], 00:11:34.761 | 30.00th=[ 9503], 40.00th=[ 9765], 50.00th=[10028], 60.00th=[10552], 00:11:34.761 | 70.00th=[10814], 80.00th=[11469], 90.00th=[14091], 95.00th=[14615], 00:11:34.761 | 99.00th=[43254], 99.50th=[47449], 99.90th=[56886], 99.95th=[56886], 00:11:34.761 | 99.99th=[56886] 
00:11:34.761 bw ( KiB/s): min=12464, max=24625, per=47.98%, avg=18544.50, stdev=8599.13, samples=2 00:11:34.761 iops : min= 3116, max= 6156, avg=4636.00, stdev=2149.60, samples=2 00:11:34.761 lat (msec) : 4=0.54%, 10=41.03%, 20=49.93%, 50=3.45%, 100=5.05% 00:11:34.761 cpu : usr=4.65%, sys=10.19%, ctx=532, majf=0, minf=4 00:11:34.761 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:11:34.761 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:34.761 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:34.761 issued rwts: total=4608,4758,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:34.761 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:34.761 job2: (groupid=0, jobs=1): err= 0: pid=69925: Wed Nov 20 11:24:20 2024 00:11:34.761 read: IOPS=1512, BW=6051KiB/s (6197kB/s)(6136KiB/1014msec) 00:11:34.761 slat (usec): min=5, max=22936, avg=377.04, stdev=1898.72 00:11:34.761 clat (usec): min=1766, max=72692, avg=45119.70, stdev=14652.39 00:11:34.761 lat (usec): min=14534, max=72710, avg=45496.74, stdev=14639.68 00:11:34.761 clat percentiles (usec): 00:11:34.761 | 1.00th=[14746], 5.00th=[27919], 10.00th=[30540], 20.00th=[31851], 00:11:34.761 | 30.00th=[32900], 40.00th=[33162], 50.00th=[43254], 60.00th=[50070], 00:11:34.761 | 70.00th=[58459], 80.00th=[60556], 90.00th=[60556], 95.00th=[69731], 00:11:34.761 | 99.00th=[72877], 99.50th=[72877], 99.90th=[72877], 99.95th=[72877], 00:11:34.761 | 99.99th=[72877] 00:11:34.761 write: IOPS=1514, BW=6059KiB/s (6205kB/s)(6144KiB/1014msec); 0 zone resets 00:11:34.761 slat (usec): min=11, max=13555, avg=274.01, stdev=1459.15 00:11:34.761 clat (usec): min=19309, max=74999, avg=37544.33, stdev=13582.66 00:11:34.761 lat (usec): min=21385, max=75022, avg=37818.34, stdev=13586.70 00:11:34.761 clat percentiles (usec): 00:11:34.761 | 1.00th=[20579], 5.00th=[25035], 10.00th=[25297], 20.00th=[25822], 00:11:34.761 | 30.00th=[26084], 40.00th=[26608], 50.00th=[31327], 60.00th=[38536], 00:11:34.761 | 70.00th=[46924], 80.00th=[54789], 90.00th=[56361], 95.00th=[61604], 00:11:34.761 | 99.00th=[70779], 99.50th=[74974], 99.90th=[74974], 99.95th=[74974], 00:11:34.761 | 99.99th=[74974] 00:11:34.761 bw ( KiB/s): min= 4096, max= 8192, per=15.90%, avg=6144.00, stdev=2896.31, samples=2 00:11:34.761 iops : min= 1024, max= 2048, avg=1536.00, stdev=724.08, samples=2 00:11:34.761 lat (msec) : 2=0.03%, 20=1.27%, 50=67.10%, 100=31.60% 00:11:34.761 cpu : usr=1.48%, sys=4.34%, ctx=151, majf=0, minf=5 00:11:34.761 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.0%, >=64=97.9% 00:11:34.761 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:34.761 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:34.761 issued rwts: total=1534,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:34.761 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:34.761 job3: (groupid=0, jobs=1): err= 0: pid=69926: Wed Nov 20 11:24:20 2024 00:11:34.761 read: IOPS=1973, BW=7892KiB/s (8082kB/s)(7916KiB/1003msec) 00:11:34.762 slat (usec): min=4, max=8064, avg=250.60, stdev=1052.54 00:11:34.762 clat (usec): min=1065, max=40344, avg=30373.87, stdev=5635.96 00:11:34.762 lat (usec): min=8012, max=40359, avg=30624.47, stdev=5601.55 00:11:34.762 clat percentiles (usec): 00:11:34.762 | 1.00th=[ 8356], 5.00th=[20579], 10.00th=[22414], 20.00th=[26084], 00:11:34.762 | 30.00th=[30278], 40.00th=[31327], 50.00th=[31851], 60.00th=[32900], 00:11:34.762 | 70.00th=[33817], 
80.00th=[34341], 90.00th=[35390], 95.00th=[35914], 00:11:34.762 | 99.00th=[39060], 99.50th=[40109], 99.90th=[40109], 99.95th=[40109], 00:11:34.762 | 99.99th=[40109] 00:11:34.762 write: IOPS=2041, BW=8167KiB/s (8364kB/s)(8192KiB/1003msec); 0 zone resets 00:11:34.762 slat (usec): min=11, max=8151, avg=238.39, stdev=1090.29 00:11:34.762 clat (usec): min=21236, max=39740, avg=31992.52, stdev=2662.34 00:11:34.762 lat (usec): min=21909, max=39758, avg=32230.91, stdev=2471.24 00:11:34.762 clat percentiles (usec): 00:11:34.762 | 1.00th=[23987], 5.00th=[27657], 10.00th=[29492], 20.00th=[29754], 00:11:34.762 | 30.00th=[30540], 40.00th=[31589], 50.00th=[32637], 60.00th=[32900], 00:11:34.762 | 70.00th=[33817], 80.00th=[34341], 90.00th=[34341], 95.00th=[35390], 00:11:34.762 | 99.00th=[39060], 99.50th=[39060], 99.90th=[39584], 99.95th=[39584], 00:11:34.762 | 99.99th=[39584] 00:11:34.762 bw ( KiB/s): min= 8192, max= 8208, per=21.21%, avg=8200.00, stdev=11.31, samples=2 00:11:34.762 iops : min= 2048, max= 2052, avg=2050.00, stdev= 2.83, samples=2 00:11:34.762 lat (msec) : 2=0.02%, 10=0.79%, 20=1.32%, 50=97.86% 00:11:34.762 cpu : usr=2.00%, sys=5.69%, ctx=235, majf=0, minf=1 00:11:34.762 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:11:34.762 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:34.762 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:34.762 issued rwts: total=1979,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:34.762 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:34.762 00:11:34.762 Run status group 0 (all jobs): 00:11:34.762 READ: bw=35.1MiB/s (36.8MB/s), 4024KiB/s-17.8MiB/s (4120kB/s-18.7MB/s), io=35.7MiB (37.5MB), run=1003-1018msec 00:11:34.762 WRITE: bw=37.7MiB/s (39.6MB/s), 5874KiB/s-18.4MiB/s (6015kB/s-19.3MB/s), io=38.4MiB (40.3MB), run=1003-1018msec 00:11:34.762 00:11:34.762 Disk stats (read/write): 00:11:34.762 nvme0n1: ios=1074/1271, merge=0/0, ticks=15864/35031, in_queue=50895, util=91.38% 00:11:34.762 nvme0n2: ios=4418/4608, merge=0/0, ticks=46307/43846, in_queue=90153, util=88.45% 00:11:34.762 nvme0n3: ios=1248/1536, merge=0/0, ticks=13304/12917, in_queue=26221, util=91.60% 00:11:34.762 nvme0n4: ios=1536/1888, merge=0/0, ticks=11903/14145, in_queue=26048, util=89.64% 00:11:34.762 11:24:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:11:34.762 [global] 00:11:34.762 thread=1 00:11:34.762 invalidate=1 00:11:34.762 rw=randwrite 00:11:34.762 time_based=1 00:11:34.762 runtime=1 00:11:34.762 ioengine=libaio 00:11:34.762 direct=1 00:11:34.762 bs=4096 00:11:34.762 iodepth=128 00:11:34.762 norandommap=0 00:11:34.762 numjobs=1 00:11:34.762 00:11:34.762 verify_dump=1 00:11:34.762 verify_backlog=512 00:11:34.762 verify_state_save=0 00:11:34.762 do_verify=1 00:11:34.762 verify=crc32c-intel 00:11:34.762 [job0] 00:11:34.762 filename=/dev/nvme0n1 00:11:34.762 [job1] 00:11:34.762 filename=/dev/nvme0n2 00:11:34.762 [job2] 00:11:34.762 filename=/dev/nvme0n3 00:11:34.762 [job3] 00:11:34.762 filename=/dev/nvme0n4 00:11:34.762 Could not set queue depth (nvme0n1) 00:11:34.762 Could not set queue depth (nvme0n2) 00:11:34.762 Could not set queue depth (nvme0n3) 00:11:34.762 Could not set queue depth (nvme0n4) 00:11:34.762 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:34.762 job1: (g=0): rw=randwrite, bs=(R) 
4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:34.762 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:34.762 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:34.762 fio-3.35 00:11:34.762 Starting 4 threads 00:11:36.226 00:11:36.226 job0: (groupid=0, jobs=1): err= 0: pid=69979: Wed Nov 20 11:24:21 2024 00:11:36.226 read: IOPS=3264, BW=12.8MiB/s (13.4MB/s)(12.8MiB/1004msec) 00:11:36.226 slat (usec): min=3, max=13734, avg=162.71, stdev=951.11 00:11:36.226 clat (usec): min=1835, max=45764, avg=20100.50, stdev=7999.54 00:11:36.226 lat (usec): min=5641, max=45795, avg=20263.22, stdev=8075.07 00:11:36.226 clat percentiles (usec): 00:11:36.226 | 1.00th=[ 7308], 5.00th=[ 9896], 10.00th=[11600], 20.00th=[12780], 00:11:36.226 | 30.00th=[13435], 40.00th=[14746], 50.00th=[18220], 60.00th=[21890], 00:11:36.226 | 70.00th=[24773], 80.00th=[28705], 90.00th=[31327], 95.00th=[34341], 00:11:36.226 | 99.00th=[36963], 99.50th=[37487], 99.90th=[40109], 99.95th=[45351], 00:11:36.226 | 99.99th=[45876] 00:11:36.226 write: IOPS=3569, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1004msec); 0 zone resets 00:11:36.226 slat (usec): min=4, max=10764, avg=123.33, stdev=559.39 00:11:36.226 clat (usec): min=3188, max=37885, avg=17094.19, stdev=5824.77 00:11:36.226 lat (usec): min=3213, max=37907, avg=17217.52, stdev=5865.88 00:11:36.226 clat percentiles (usec): 00:11:36.226 | 1.00th=[ 5342], 5.00th=[ 8717], 10.00th=[10945], 20.00th=[13042], 00:11:36.226 | 30.00th=[13435], 40.00th=[14091], 50.00th=[14746], 60.00th=[19006], 00:11:36.226 | 70.00th=[20579], 80.00th=[22414], 90.00th=[25035], 95.00th=[26608], 00:11:36.226 | 99.00th=[32637], 99.50th=[35914], 99.90th=[37487], 99.95th=[38011], 00:11:36.226 | 99.99th=[38011] 00:11:36.226 bw ( KiB/s): min=10968, max=17704, per=21.50%, avg=14336.00, stdev=4763.07, samples=2 00:11:36.226 iops : min= 2742, max= 4426, avg=3584.00, stdev=1190.77, samples=2 00:11:36.226 lat (msec) : 2=0.01%, 4=0.07%, 10=6.48%, 20=53.79%, 50=39.64% 00:11:36.226 cpu : usr=3.29%, sys=8.28%, ctx=693, majf=0, minf=13 00:11:36.226 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:11:36.226 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:36.226 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:36.226 issued rwts: total=3278,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:36.226 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:36.226 job1: (groupid=0, jobs=1): err= 0: pid=69980: Wed Nov 20 11:24:21 2024 00:11:36.226 read: IOPS=4701, BW=18.4MiB/s (19.3MB/s)(18.5MiB/1007msec) 00:11:36.226 slat (usec): min=3, max=12295, avg=113.08, stdev=740.49 00:11:36.226 clat (usec): min=1179, max=26534, avg=13777.39, stdev=3469.39 00:11:36.226 lat (usec): min=4712, max=26546, avg=13890.46, stdev=3508.37 00:11:36.226 clat percentiles (usec): 00:11:36.226 | 1.00th=[ 5669], 5.00th=[ 9110], 10.00th=[10290], 20.00th=[11076], 00:11:36.226 | 30.00th=[12125], 40.00th=[12649], 50.00th=[13173], 60.00th=[13698], 00:11:36.226 | 70.00th=[14484], 80.00th=[16319], 90.00th=[18744], 95.00th=[21103], 00:11:36.226 | 99.00th=[24249], 99.50th=[25035], 99.90th=[26608], 99.95th=[26608], 00:11:36.226 | 99.99th=[26608] 00:11:36.226 write: IOPS=5084, BW=19.9MiB/s (20.8MB/s)(20.0MiB/1007msec); 0 zone resets 00:11:36.226 slat (usec): min=4, max=11025, avg=84.99, stdev=390.74 00:11:36.226 clat (usec): 
min=3693, max=26496, avg=12180.76, stdev=2714.22 00:11:36.226 lat (usec): min=3718, max=26504, avg=12265.75, stdev=2748.19 00:11:36.226 clat percentiles (usec): 00:11:36.226 | 1.00th=[ 4621], 5.00th=[ 6390], 10.00th=[ 7832], 20.00th=[10552], 00:11:36.226 | 30.00th=[11469], 40.00th=[12518], 50.00th=[13042], 60.00th=[13304], 00:11:36.226 | 70.00th=[13566], 80.00th=[14091], 90.00th=[14877], 95.00th=[15270], 00:11:36.226 | 99.00th=[16188], 99.50th=[16319], 99.90th=[24249], 99.95th=[25297], 00:11:36.226 | 99.99th=[26608] 00:11:36.226 bw ( KiB/s): min=20464, max=20480, per=30.70%, avg=20472.00, stdev=11.31, samples=2 00:11:36.226 iops : min= 5116, max= 5120, avg=5118.00, stdev= 2.83, samples=2 00:11:36.226 lat (msec) : 2=0.01%, 4=0.13%, 10=12.56%, 20=83.64%, 50=3.65% 00:11:36.226 cpu : usr=4.17%, sys=10.83%, ctx=654, majf=0, minf=17 00:11:36.226 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:11:36.226 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:36.226 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:36.226 issued rwts: total=4734,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:36.226 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:36.226 job2: (groupid=0, jobs=1): err= 0: pid=69981: Wed Nov 20 11:24:21 2024 00:11:36.226 read: IOPS=4580, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1006msec) 00:11:36.226 slat (usec): min=3, max=6477, avg=108.07, stdev=522.01 00:11:36.226 clat (usec): min=7526, max=21168, avg=13717.58, stdev=1972.73 00:11:36.226 lat (usec): min=7546, max=21191, avg=13825.64, stdev=2015.25 00:11:36.226 clat percentiles (usec): 00:11:36.226 | 1.00th=[ 8979], 5.00th=[10159], 10.00th=[11600], 20.00th=[12125], 00:11:36.226 | 30.00th=[12780], 40.00th=[13042], 50.00th=[13698], 60.00th=[14222], 00:11:36.226 | 70.00th=[14746], 80.00th=[15270], 90.00th=[16188], 95.00th=[16909], 00:11:36.226 | 99.00th=[18744], 99.50th=[19530], 99.90th=[20579], 99.95th=[20579], 00:11:36.226 | 99.99th=[21103] 00:11:36.226 write: IOPS=4714, BW=18.4MiB/s (19.3MB/s)(18.5MiB/1006msec); 0 zone resets 00:11:36.226 slat (usec): min=10, max=6051, avg=98.18, stdev=425.28 00:11:36.226 clat (usec): min=5126, max=20484, avg=13481.06, stdev=1877.47 00:11:36.226 lat (usec): min=5770, max=21126, avg=13579.25, stdev=1913.55 00:11:36.226 clat percentiles (usec): 00:11:36.226 | 1.00th=[ 8356], 5.00th=[10290], 10.00th=[11469], 20.00th=[12387], 00:11:36.226 | 30.00th=[12780], 40.00th=[13173], 50.00th=[13435], 60.00th=[13698], 00:11:36.226 | 70.00th=[14091], 80.00th=[14746], 90.00th=[15664], 95.00th=[16450], 00:11:36.226 | 99.00th=[19006], 99.50th=[19792], 99.90th=[20317], 99.95th=[20317], 00:11:36.226 | 99.99th=[20579] 00:11:36.226 bw ( KiB/s): min=18296, max=18688, per=27.74%, avg=18492.00, stdev=277.19, samples=2 00:11:36.226 iops : min= 4574, max= 4672, avg=4623.00, stdev=69.30, samples=2 00:11:36.226 lat (msec) : 10=4.16%, 20=95.59%, 50=0.25% 00:11:36.226 cpu : usr=3.58%, sys=13.83%, ctx=601, majf=0, minf=12 00:11:36.226 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:11:36.226 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:36.226 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:36.226 issued rwts: total=4608,4743,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:36.226 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:36.226 job3: (groupid=0, jobs=1): err= 0: pid=69982: Wed Nov 20 11:24:21 2024 00:11:36.226 read: IOPS=3062, BW=12.0MiB/s 
(12.5MB/s)(12.0MiB/1003msec) 00:11:36.227 slat (usec): min=3, max=10192, avg=172.83, stdev=820.14 00:11:36.227 clat (usec): min=9183, max=39659, avg=21491.83, stdev=6690.69 00:11:36.227 lat (usec): min=9498, max=39676, avg=21664.66, stdev=6756.97 00:11:36.227 clat percentiles (usec): 00:11:36.227 | 1.00th=[10683], 5.00th=[11863], 10.00th=[14615], 20.00th=[15139], 00:11:36.227 | 30.00th=[15533], 40.00th=[16319], 50.00th=[20841], 60.00th=[25035], 00:11:36.227 | 70.00th=[27132], 80.00th=[29230], 90.00th=[30278], 95.00th=[30802], 00:11:36.227 | 99.00th=[34341], 99.50th=[35390], 99.90th=[36963], 99.95th=[38536], 00:11:36.227 | 99.99th=[39584] 00:11:36.227 write: IOPS=3328, BW=13.0MiB/s (13.6MB/s)(13.0MiB/1003msec); 0 zone resets 00:11:36.227 slat (usec): min=4, max=7137, avg=133.33, stdev=558.62 00:11:36.227 clat (usec): min=1663, max=32217, avg=18198.20, stdev=4817.18 00:11:36.227 lat (usec): min=6783, max=32237, avg=18331.53, stdev=4841.67 00:11:36.227 clat percentiles (usec): 00:11:36.227 | 1.00th=[ 9634], 5.00th=[12125], 10.00th=[13829], 20.00th=[14615], 00:11:36.227 | 30.00th=[14877], 40.00th=[15533], 50.00th=[16057], 60.00th=[18482], 00:11:36.227 | 70.00th=[20841], 80.00th=[22676], 90.00th=[25560], 95.00th=[27395], 00:11:36.227 | 99.00th=[29492], 99.50th=[30016], 99.90th=[31851], 99.95th=[32113], 00:11:36.227 | 99.99th=[32113] 00:11:36.227 bw ( KiB/s): min= 9296, max=16384, per=19.26%, avg=12840.00, stdev=5011.97, samples=2 00:11:36.227 iops : min= 2324, max= 4096, avg=3210.00, stdev=1252.99, samples=2 00:11:36.227 lat (msec) : 2=0.02%, 10=1.25%, 20=56.33%, 50=42.40% 00:11:36.227 cpu : usr=2.89%, sys=8.78%, ctx=707, majf=0, minf=9 00:11:36.227 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:11:36.227 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:36.227 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:36.227 issued rwts: total=3072,3338,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:36.227 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:36.227 00:11:36.227 Run status group 0 (all jobs): 00:11:36.227 READ: bw=60.9MiB/s (63.8MB/s), 12.0MiB/s-18.4MiB/s (12.5MB/s-19.3MB/s), io=61.3MiB (64.3MB), run=1003-1007msec 00:11:36.227 WRITE: bw=65.1MiB/s (68.3MB/s), 13.0MiB/s-19.9MiB/s (13.6MB/s-20.8MB/s), io=65.6MiB (68.8MB), run=1003-1007msec 00:11:36.227 00:11:36.227 Disk stats (read/write): 00:11:36.227 nvme0n1: ios=3079/3072, merge=0/0, ticks=35972/31917, in_queue=67889, util=88.38% 00:11:36.227 nvme0n2: ios=4143/4335, merge=0/0, ticks=53095/50106, in_queue=103201, util=87.78% 00:11:36.227 nvme0n3: ios=3741/4096, merge=0/0, ticks=25307/25136, in_queue=50443, util=89.00% 00:11:36.227 nvme0n4: ios=2649/3072, merge=0/0, ticks=21019/20833, in_queue=41852, util=89.66% 00:11:36.227 11:24:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:11:36.227 11:24:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=70002 00:11:36.227 11:24:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:11:36.227 11:24:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:11:36.227 [global] 00:11:36.227 thread=1 00:11:36.227 invalidate=1 00:11:36.227 rw=read 00:11:36.227 time_based=1 00:11:36.227 runtime=10 00:11:36.227 ioengine=libaio 00:11:36.227 direct=1 00:11:36.227 bs=4096 00:11:36.227 iodepth=1 00:11:36.227 norandommap=1 00:11:36.227 
numjobs=1 00:11:36.227 00:11:36.227 [job0] 00:11:36.227 filename=/dev/nvme0n1 00:11:36.227 [job1] 00:11:36.227 filename=/dev/nvme0n2 00:11:36.227 [job2] 00:11:36.227 filename=/dev/nvme0n3 00:11:36.227 [job3] 00:11:36.227 filename=/dev/nvme0n4 00:11:36.227 Could not set queue depth (nvme0n1) 00:11:36.227 Could not set queue depth (nvme0n2) 00:11:36.227 Could not set queue depth (nvme0n3) 00:11:36.227 Could not set queue depth (nvme0n4) 00:11:36.227 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:36.227 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:36.227 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:36.227 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:36.227 fio-3.35 00:11:36.227 Starting 4 threads 00:11:39.509 11:24:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete concat0 00:11:39.509 fio: pid=70045, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:39.509 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=39211008, buflen=4096 00:11:39.509 11:24:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0 00:11:39.767 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=65388544, buflen=4096 00:11:39.767 fio: pid=70044, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:39.767 11:24:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:39.767 11:24:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:11:40.025 fio: pid=70042, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:40.025 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=2678784, buflen=4096 00:11:40.025 11:24:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:40.025 11:24:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:11:40.284 fio: pid=70043, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:40.284 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=56193024, buflen=4096 00:11:40.284 00:11:40.284 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=70042: Wed Nov 20 11:24:25 2024 00:11:40.284 read: IOPS=4762, BW=18.6MiB/s (19.5MB/s)(66.6MiB/3578msec) 00:11:40.284 slat (usec): min=8, max=15713, avg=19.31, stdev=186.54 00:11:40.284 clat (usec): min=132, max=41113, avg=189.18, stdev=316.92 00:11:40.284 lat (usec): min=153, max=41122, avg=208.49, stdev=368.72 00:11:40.284 clat percentiles (usec): 00:11:40.284 | 1.00th=[ 157], 5.00th=[ 163], 10.00th=[ 165], 20.00th=[ 169], 00:11:40.284 | 30.00th=[ 174], 40.00th=[ 176], 50.00th=[ 180], 60.00th=[ 182], 00:11:40.284 | 70.00th=[ 188], 80.00th=[ 194], 90.00th=[ 221], 95.00th=[ 251], 00:11:40.284 | 99.00th=[ 277], 99.50th=[ 289], 99.90th=[ 529], 99.95th=[ 848], 
00:11:40.284 | 99.99th=[ 3818] 00:11:40.284 bw ( KiB/s): min=19594, max=20696, per=35.14%, avg=20095.00, stdev=426.66, samples=6 00:11:40.284 iops : min= 4898, max= 5174, avg=5023.67, stdev=106.78, samples=6 00:11:40.284 lat (usec) : 250=94.76%, 500=5.12%, 750=0.04%, 1000=0.02% 00:11:40.284 lat (msec) : 2=0.02%, 4=0.01%, 50=0.01% 00:11:40.284 cpu : usr=1.43%, sys=6.49%, ctx=17051, majf=0, minf=1 00:11:40.284 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:40.284 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:40.284 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:40.284 issued rwts: total=17039,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:40.284 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:40.284 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=70043: Wed Nov 20 11:24:25 2024 00:11:40.284 read: IOPS=3484, BW=13.6MiB/s (14.3MB/s)(53.6MiB/3938msec) 00:11:40.284 slat (usec): min=8, max=10397, avg=18.13, stdev=169.87 00:11:40.284 clat (usec): min=111, max=41016, avg=267.49, stdev=362.04 00:11:40.284 lat (usec): min=148, max=41031, avg=285.62, stdev=400.04 00:11:40.284 clat percentiles (usec): 00:11:40.284 | 1.00th=[ 143], 5.00th=[ 153], 10.00th=[ 165], 20.00th=[ 215], 00:11:40.284 | 30.00th=[ 265], 40.00th=[ 273], 50.00th=[ 281], 60.00th=[ 285], 00:11:40.284 | 70.00th=[ 289], 80.00th=[ 297], 90.00th=[ 306], 95.00th=[ 322], 00:11:40.284 | 99.00th=[ 388], 99.50th=[ 404], 99.90th=[ 857], 99.95th=[ 2278], 00:11:40.284 | 99.99th=[ 5800] 00:11:40.284 bw ( KiB/s): min=12720, max=14155, per=23.25%, avg=13293.29, stdev=460.88, samples=7 00:11:40.284 iops : min= 3180, max= 3538, avg=3323.00, stdev=114.93, samples=7 00:11:40.284 lat (usec) : 250=26.13%, 500=73.71%, 750=0.04%, 1000=0.02% 00:11:40.284 lat (msec) : 2=0.04%, 4=0.02%, 10=0.02%, 50=0.01% 00:11:40.284 cpu : usr=1.04%, sys=4.47%, ctx=13748, majf=0, minf=1 00:11:40.284 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:40.284 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:40.284 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:40.284 issued rwts: total=13720,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:40.284 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:40.284 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=70044: Wed Nov 20 11:24:25 2024 00:11:40.284 read: IOPS=4877, BW=19.1MiB/s (20.0MB/s)(62.4MiB/3273msec) 00:11:40.284 slat (usec): min=11, max=8785, avg=15.39, stdev=90.56 00:11:40.284 clat (usec): min=128, max=2117, avg=188.29, stdev=40.17 00:11:40.284 lat (usec): min=168, max=8989, avg=203.68, stdev=99.57 00:11:40.284 clat percentiles (usec): 00:11:40.284 | 1.00th=[ 165], 5.00th=[ 169], 10.00th=[ 174], 20.00th=[ 176], 00:11:40.284 | 30.00th=[ 180], 40.00th=[ 182], 50.00th=[ 184], 60.00th=[ 188], 00:11:40.284 | 70.00th=[ 190], 80.00th=[ 196], 90.00th=[ 204], 95.00th=[ 215], 00:11:40.284 | 99.00th=[ 258], 99.50th=[ 285], 99.90th=[ 709], 99.95th=[ 1057], 00:11:40.284 | 99.99th=[ 1991] 00:11:40.284 bw ( KiB/s): min=18888, max=20152, per=34.29%, avg=19608.00, stdev=567.58, samples=6 00:11:40.284 iops : min= 4722, max= 5038, avg=4902.00, stdev=141.90, samples=6 00:11:40.284 lat (usec) : 250=98.83%, 500=0.95%, 750=0.14%, 1000=0.02% 00:11:40.285 lat (msec) : 2=0.05%, 4=0.01% 00:11:40.285 cpu : usr=1.38%, sys=5.96%, 
ctx=15972, majf=0, minf=2 00:11:40.285 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:40.285 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:40.285 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:40.285 issued rwts: total=15965,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:40.285 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:40.285 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=70045: Wed Nov 20 11:24:25 2024 00:11:40.285 read: IOPS=3220, BW=12.6MiB/s (13.2MB/s)(37.4MiB/2973msec) 00:11:40.285 slat (nsec): min=10477, max=88573, avg=14231.79, stdev=5112.93 00:11:40.285 clat (usec): min=165, max=7487, avg=294.76, stdev=129.72 00:11:40.285 lat (usec): min=197, max=7531, avg=308.99, stdev=130.41 00:11:40.285 clat percentiles (usec): 00:11:40.285 | 1.00th=[ 260], 5.00th=[ 269], 10.00th=[ 273], 20.00th=[ 277], 00:11:40.285 | 30.00th=[ 281], 40.00th=[ 285], 50.00th=[ 285], 60.00th=[ 289], 00:11:40.285 | 70.00th=[ 293], 80.00th=[ 302], 90.00th=[ 314], 95.00th=[ 343], 00:11:40.285 | 99.00th=[ 396], 99.50th=[ 408], 99.90th=[ 1401], 99.95th=[ 3589], 00:11:40.285 | 99.99th=[ 7504] 00:11:40.285 bw ( KiB/s): min=12095, max=13368, per=22.46%, avg=12843.00, stdev=481.81, samples=5 00:11:40.285 iops : min= 3023, max= 3342, avg=3210.60, stdev=120.74, samples=5 00:11:40.285 lat (usec) : 250=0.28%, 500=99.53%, 750=0.05%, 1000=0.02% 00:11:40.285 lat (msec) : 2=0.03%, 4=0.05%, 10=0.02% 00:11:40.285 cpu : usr=0.84%, sys=4.27%, ctx=9581, majf=0, minf=2 00:11:40.285 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:40.285 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:40.285 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:40.285 issued rwts: total=9574,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:40.285 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:40.285 00:11:40.285 Run status group 0 (all jobs): 00:11:40.285 READ: bw=55.8MiB/s (58.6MB/s), 12.6MiB/s-19.1MiB/s (13.2MB/s-20.0MB/s), io=220MiB (231MB), run=2973-3938msec 00:11:40.285 00:11:40.285 Disk stats (read/write): 00:11:40.285 nvme0n1: ios=16216/0, merge=0/0, ticks=3048/0, in_queue=3048, util=94.99% 00:11:40.285 nvme0n2: ios=13332/0, merge=0/0, ticks=3581/0, in_queue=3581, util=95.59% 00:11:40.285 nvme0n3: ios=15161/0, merge=0/0, ticks=2885/0, in_queue=2885, util=96.33% 00:11:40.285 nvme0n4: ios=9228/0, merge=0/0, ticks=2697/0, in_queue=2697, util=96.46% 00:11:40.285 11:24:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:40.285 11:24:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:11:40.851 11:24:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:40.851 11:24:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:11:41.109 11:24:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:41.109 11:24:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_malloc_delete Malloc4 00:11:41.369 11:24:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:41.369 11:24:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:11:41.635 11:24:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:41.635 11:24:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:11:41.893 11:24:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:11:41.893 11:24:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 70002 00:11:41.893 11:24:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:11:41.893 11:24:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:41.893 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:41.893 11:24:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:41.893 11:24:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:11:41.893 11:24:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:41.893 11:24:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:41.893 11:24:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:41.893 11:24:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:41.893 nvmf hotplug test: fio failed as expected 00:11:41.893 11:24:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:11:41.893 11:24:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:11:41.893 11:24:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:11:41.893 11:24:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:42.152 11:24:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:11:42.152 11:24:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:11:42.152 11:24:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:11:42.152 11:24:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:11:42.152 11:24:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:11:42.152 11:24:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:42.152 11:24:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:11:42.152 11:24:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:42.152 11:24:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:11:42.152 
11:24:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:42.152 11:24:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:42.152 rmmod nvme_tcp 00:11:42.152 rmmod nvme_fabrics 00:11:42.152 rmmod nvme_keyring 00:11:42.152 11:24:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:42.152 11:24:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:11:42.152 11:24:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:11:42.152 11:24:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 69505 ']' 00:11:42.152 11:24:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 69505 00:11:42.152 11:24:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 69505 ']' 00:11:42.152 11:24:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 69505 00:11:42.152 11:24:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:11:42.152 11:24:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:42.152 11:24:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69505 00:11:42.152 killing process with pid 69505 00:11:42.152 11:24:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:42.152 11:24:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:42.152 11:24:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69505' 00:11:42.152 11:24:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 69505 00:11:42.152 11:24:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 69505 00:11:42.412 11:24:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:42.412 11:24:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:42.412 11:24:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:42.412 11:24:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:11:42.412 11:24:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:11:42.412 11:24:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:42.412 11:24:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:11:42.412 11:24:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:42.412 11:24:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:11:42.412 11:24:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:11:42.412 11:24:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:11:42.412 11:24:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:11:42.412 11:24:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:11:42.412 11:24:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:11:42.412 11:24:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:11:42.412 11:24:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:11:42.412 11:24:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:11:42.412 11:24:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:11:42.671 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:11:42.671 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:11:42.671 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:42.671 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:42.671 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@246 -- # remove_spdk_ns 00:11:42.671 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:42.671 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:42.671 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:42.671 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@300 -- # return 0 00:11:42.671 00:11:42.671 real 0m20.583s 00:11:42.671 user 1m19.500s 00:11:42.671 sys 0m8.452s 00:11:42.671 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:42.671 ************************************ 00:11:42.671 END TEST nvmf_fio_target 00:11:42.671 ************************************ 00:11:42.671 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:42.671 11:24:28 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:11:42.671 11:24:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:42.671 11:24:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:42.671 11:24:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:42.671 ************************************ 00:11:42.671 START TEST nvmf_bdevio 00:11:42.671 ************************************ 00:11:42.672 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:11:42.672 * Looking for test storage... 
00:11:42.672 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:42.672 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:42.672 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lcov --version 00:11:42.672 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:42.931 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:42.931 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:42.931 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:42.931 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:42.931 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:11:42.931 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:11:42.931 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:11:42.931 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:11:42.931 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:11:42.931 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:11:42.931 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:11:42.931 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:42.931 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:11:42.931 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:11:42.931 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:42.931 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:42.931 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:11:42.931 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:11:42.931 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:42.931 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:11:42.931 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:11:42.931 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:11:42.931 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:11:42.931 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:42.931 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:11:42.931 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:11:42.931 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:42.931 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:42.931 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:11:42.931 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:42.931 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:42.931 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:42.931 --rc genhtml_branch_coverage=1 00:11:42.931 --rc genhtml_function_coverage=1 00:11:42.931 --rc genhtml_legend=1 00:11:42.931 --rc geninfo_all_blocks=1 00:11:42.931 --rc geninfo_unexecuted_blocks=1 00:11:42.931 00:11:42.931 ' 00:11:42.931 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:42.931 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:42.931 --rc genhtml_branch_coverage=1 00:11:42.931 --rc genhtml_function_coverage=1 00:11:42.931 --rc genhtml_legend=1 00:11:42.931 --rc geninfo_all_blocks=1 00:11:42.931 --rc geninfo_unexecuted_blocks=1 00:11:42.931 00:11:42.931 ' 00:11:42.931 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:42.931 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:42.931 --rc genhtml_branch_coverage=1 00:11:42.931 --rc genhtml_function_coverage=1 00:11:42.931 --rc genhtml_legend=1 00:11:42.931 --rc geninfo_all_blocks=1 00:11:42.931 --rc geninfo_unexecuted_blocks=1 00:11:42.931 00:11:42.931 ' 00:11:42.931 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:42.931 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:42.931 --rc genhtml_branch_coverage=1 00:11:42.931 --rc genhtml_function_coverage=1 00:11:42.931 --rc genhtml_legend=1 00:11:42.931 --rc geninfo_all_blocks=1 00:11:42.931 --rc geninfo_unexecuted_blocks=1 00:11:42.931 00:11:42.931 ' 00:11:42.932 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:42.932 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:11:42.932 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:11:42.932 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:42.932 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:42.932 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:42.932 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:42.932 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:42.932 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:42.932 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:42.932 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:42.932 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:42.932 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a 00:11:42.932 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=806d442c-08ba-4719-b76d-33169283459a 00:11:42.932 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:42.932 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:42.932 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:42.932 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:42.932 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:42.932 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:11:42.932 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:42.932 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:42.932 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:42.932 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:42.932 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:42.932 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:42.932 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:11:42.932 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:42.932 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:11:42.932 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:42.932 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:42.932 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:42.932 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:42.932 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:42.932 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:42.932 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:42.932 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:42.932 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:42.932 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:42.932 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:42.932 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:42.932 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 
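The nvmftestinit call traced below runs with NET_TYPE=virt, so instead of touching physical NICs it builds a throwaway veth/bridge topology, with the target side hidden inside the nvmf_tgt_ns_spdk network namespace. Condensed from the ip/iptables commands in the following entries, and leaving out the duplicate *_if2/*_br2 pair and the error handling, the setup amounts roughly to this sketch:

  # sketch only, distilled from the trace below; interface names and addresses as used by nvmf/common.sh
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator end stays in the root netns
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br       # target end ...
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                # ... is moved into the namespace
  ip addr add 10.0.0.1/24 dev nvmf_init_if                      # initiator address
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if   # target address
  ip link set nvmf_init_if up && ip link set nvmf_init_br up && ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br                       # bridge the two halves together
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # tagged SPDK_NVMF so teardown can find it
  ping -c 1 10.0.0.3                                            # sanity-check the initiator -> target path

The "Cannot find device" and "Cannot open network namespace" messages just above are expected: the same script first tries to remove any leftovers from a previous run before creating the devices fresh.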
00:11:42.932 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:42.932 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:42.932 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:42.932 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:42.932 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:42.932 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:42.932 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:42.932 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:42.932 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:11:42.932 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:11:42.932 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:11:42.932 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:11:42.932 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:11:42.932 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@460 -- # nvmf_veth_init 00:11:42.932 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:42.932 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:11:42.932 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:11:42.932 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:11:42.932 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:42.932 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:11:42.932 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:42.932 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:11:42.932 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:42.932 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:11:42.932 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:42.932 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:42.932 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:42.932 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:42.932 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:42.932 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:42.932 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio 
-- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:11:42.932 Cannot find device "nvmf_init_br" 00:11:42.932 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@162 -- # true 00:11:42.932 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:11:42.932 Cannot find device "nvmf_init_br2" 00:11:42.932 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@163 -- # true 00:11:42.932 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:11:42.932 Cannot find device "nvmf_tgt_br" 00:11:42.932 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@164 -- # true 00:11:42.932 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:11:42.932 Cannot find device "nvmf_tgt_br2" 00:11:42.932 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@165 -- # true 00:11:42.932 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:11:42.932 Cannot find device "nvmf_init_br" 00:11:42.932 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@166 -- # true 00:11:42.932 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:11:42.932 Cannot find device "nvmf_init_br2" 00:11:42.932 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@167 -- # true 00:11:42.932 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:11:42.932 Cannot find device "nvmf_tgt_br" 00:11:42.932 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@168 -- # true 00:11:42.932 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:11:42.932 Cannot find device "nvmf_tgt_br2" 00:11:42.932 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@169 -- # true 00:11:42.932 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:11:42.932 Cannot find device "nvmf_br" 00:11:42.932 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@170 -- # true 00:11:42.932 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:11:42.932 Cannot find device "nvmf_init_if" 00:11:42.932 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@171 -- # true 00:11:42.933 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:11:42.933 Cannot find device "nvmf_init_if2" 00:11:42.933 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@172 -- # true 00:11:42.933 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:42.933 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:42.933 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@173 -- # true 00:11:42.933 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:42.933 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:42.933 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@174 -- # true 00:11:42.933 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:11:42.933 
11:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:42.933 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:11:42.933 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:42.933 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:42.933 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:43.191 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:43.191 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:43.191 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:11:43.191 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:11:43.191 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:11:43.191 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:11:43.191 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:11:43.192 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:11:43.192 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:11:43.192 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:11:43.192 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:11:43.192 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:43.192 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:43.192 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:43.192 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:11:43.192 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:11:43.192 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:11:43.192 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:11:43.192 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:43.192 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:43.192 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:43.192 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 
4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:11:43.192 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:11:43.192 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:11:43.192 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:43.192 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:11:43.192 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:11:43.192 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:43.192 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.062 ms 00:11:43.192 00:11:43.192 --- 10.0.0.3 ping statistics --- 00:11:43.192 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:43.192 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:11:43.192 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:11:43.192 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:11:43.192 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.041 ms 00:11:43.192 00:11:43.192 --- 10.0.0.4 ping statistics --- 00:11:43.192 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:43.192 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:11:43.192 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:43.192 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:43.192 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:11:43.192 00:11:43.192 --- 10.0.0.1 ping statistics --- 00:11:43.192 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:43.192 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:11:43.192 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:11:43.192 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:43.192 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.064 ms 00:11:43.192 00:11:43.192 --- 10.0.0.2 ping statistics --- 00:11:43.192 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:43.192 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:11:43.192 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:43.192 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@461 -- # return 0 00:11:43.192 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:43.192 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:43.192 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:43.192 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:43.192 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:43.192 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:43.192 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:43.192 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:11:43.192 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:43.192 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:43.192 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:43.192 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:43.192 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=70438 00:11:43.192 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 70438 00:11:43.192 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:11:43.192 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 70438 ']' 00:11:43.192 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:43.192 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:43.192 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:43.192 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:43.192 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:43.451 [2024-11-20 11:24:28.784382] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 
00:11:43.451 [2024-11-20 11:24:28.784714] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:43.451 [2024-11-20 11:24:28.934522] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:43.451 [2024-11-20 11:24:28.970946] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:43.451 [2024-11-20 11:24:28.971405] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:43.451 [2024-11-20 11:24:28.971678] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:43.451 [2024-11-20 11:24:28.971913] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:43.451 [2024-11-20 11:24:28.972179] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:43.451 [2024-11-20 11:24:28.973313] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:11:43.451 [2024-11-20 11:24:28.973464] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:11:43.451 [2024-11-20 11:24:28.973570] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:43.451 [2024-11-20 11:24:28.973570] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:11:43.710 11:24:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:43.710 11:24:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:11:43.710 11:24:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:43.710 11:24:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:43.710 11:24:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:43.710 11:24:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:43.710 11:24:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:43.710 11:24:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.710 11:24:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:43.710 [2024-11-20 11:24:29.103669] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:43.710 11:24:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.710 11:24:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:43.710 11:24:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.710 11:24:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:43.710 Malloc0 00:11:43.710 11:24:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.710 11:24:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:43.710 11:24:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 
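bdevio.sh configures the just-started target entirely over JSON-RPC; the rpc_cmd helper in these entries is, in effect, SPDK's scripts/rpc.py client (the script path itself is not shown in the trace) talking to the default /var/tmp/spdk.sock the target announced above. A minimal stand-alone equivalent of the sequence traced here, with the namespace and listener steps that follow right below, would look roughly like:

  # target was launched above as:
  #   ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192        # TCP transport, 8 KiB IO unit size
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0           # 64 MiB RAM-backed bdev, 512 B blocks
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420

The bdevio application itself is then fed its connection parameters (traddr 10.0.0.3, trsvcid 4420, subnqn cnode1) as a generated JSON config on /dev/fd/62, which is what the gen_nvmf_target_json heredoc below produces.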
00:11:43.710 11:24:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:43.710 11:24:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.710 11:24:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:43.710 11:24:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.710 11:24:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:43.710 11:24:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.710 11:24:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:11:43.710 11:24:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.710 11:24:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:43.710 [2024-11-20 11:24:29.161998] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:11:43.710 11:24:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.710 11:24:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:11:43.710 11:24:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:11:43.710 11:24:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:11:43.710 11:24:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:11:43.710 11:24:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:11:43.710 11:24:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:11:43.710 { 00:11:43.710 "params": { 00:11:43.710 "name": "Nvme$subsystem", 00:11:43.710 "trtype": "$TEST_TRANSPORT", 00:11:43.710 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:43.710 "adrfam": "ipv4", 00:11:43.710 "trsvcid": "$NVMF_PORT", 00:11:43.710 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:43.710 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:43.710 "hdgst": ${hdgst:-false}, 00:11:43.710 "ddgst": ${ddgst:-false} 00:11:43.710 }, 00:11:43.710 "method": "bdev_nvme_attach_controller" 00:11:43.710 } 00:11:43.710 EOF 00:11:43.710 )") 00:11:43.710 11:24:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:11:43.710 11:24:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 00:11:43.710 11:24:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:11:43.710 11:24:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:11:43.710 "params": { 00:11:43.710 "name": "Nvme1", 00:11:43.710 "trtype": "tcp", 00:11:43.710 "traddr": "10.0.0.3", 00:11:43.710 "adrfam": "ipv4", 00:11:43.710 "trsvcid": "4420", 00:11:43.710 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:43.710 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:43.710 "hdgst": false, 00:11:43.710 "ddgst": false 00:11:43.710 }, 00:11:43.710 "method": "bdev_nvme_attach_controller" 00:11:43.710 }' 00:11:43.710 [2024-11-20 11:24:29.221653] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 
00:11:43.710 [2024-11-20 11:24:29.221758] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70474 ] 00:11:43.968 [2024-11-20 11:24:29.379814] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:43.968 [2024-11-20 11:24:29.423255] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:43.968 [2024-11-20 11:24:29.423397] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:43.968 [2024-11-20 11:24:29.423633] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:44.226 I/O targets: 00:11:44.226 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:11:44.226 00:11:44.226 00:11:44.226 CUnit - A unit testing framework for C - Version 2.1-3 00:11:44.226 http://cunit.sourceforge.net/ 00:11:44.226 00:11:44.226 00:11:44.226 Suite: bdevio tests on: Nvme1n1 00:11:44.226 Test: blockdev write read block ...passed 00:11:44.226 Test: blockdev write zeroes read block ...passed 00:11:44.226 Test: blockdev write zeroes read no split ...passed 00:11:44.226 Test: blockdev write zeroes read split ...passed 00:11:44.226 Test: blockdev write zeroes read split partial ...passed 00:11:44.226 Test: blockdev reset ...[2024-11-20 11:24:29.684368] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:11:44.226 [2024-11-20 11:24:29.684486] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fc3f50 (9): Bad file descriptor 00:11:44.226 [2024-11-20 11:24:29.697489] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:11:44.226 passed 00:11:44.226 Test: blockdev write read 8 blocks ...passed 00:11:44.226 Test: blockdev write read size > 128k ...passed 00:11:44.226 Test: blockdev write read invalid size ...passed 00:11:44.226 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:44.226 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:44.227 Test: blockdev write read max offset ...passed 00:11:44.485 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:44.485 Test: blockdev writev readv 8 blocks ...passed 00:11:44.485 Test: blockdev writev readv 30 x 1block ...passed 00:11:44.485 Test: blockdev writev readv block ...passed 00:11:44.485 Test: blockdev writev readv size > 128k ...passed 00:11:44.485 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:44.485 Test: blockdev comparev and writev ...[2024-11-20 11:24:29.871739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:44.485 [2024-11-20 11:24:29.871799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:11:44.485 [2024-11-20 11:24:29.871820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:44.485 [2024-11-20 11:24:29.871832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:11:44.485 [2024-11-20 11:24:29.872259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:44.485 [2024-11-20 11:24:29.872301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:11:44.485 [2024-11-20 11:24:29.872332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:44.485 [2024-11-20 11:24:29.872353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:11:44.485 [2024-11-20 11:24:29.873004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:44.485 [2024-11-20 11:24:29.873042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:11:44.485 [2024-11-20 11:24:29.873062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:44.485 [2024-11-20 11:24:29.873073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:11:44.485 [2024-11-20 11:24:29.873485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:44.485 [2024-11-20 11:24:29.873520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:11:44.485 [2024-11-20 11:24:29.873540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:44.485 [2024-11-20 11:24:29.873551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:11:44.485 passed 00:11:44.485 Test: blockdev nvme passthru rw ...passed 00:11:44.485 Test: blockdev nvme passthru vendor specific ...[2024-11-20 11:24:29.958140] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:44.485 [2024-11-20 11:24:29.958191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:11:44.485 [2024-11-20 11:24:29.958596] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:44.485 [2024-11-20 11:24:29.958644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:11:44.485 [2024-11-20 11:24:29.958957] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:44.485 [2024-11-20 11:24:29.958989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:11:44.485 [2024-11-20 11:24:29.959115] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:44.485 [2024-11-20 11:24:29.959250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:11:44.485 passed 00:11:44.485 Test: blockdev nvme admin passthru ...passed 00:11:44.485 Test: blockdev copy ...passed 00:11:44.485 00:11:44.485 Run Summary: Type Total Ran Passed Failed Inactive 00:11:44.485 suites 1 1 n/a 0 0 00:11:44.485 tests 23 23 23 0 0 00:11:44.485 asserts 152 152 152 0 n/a 00:11:44.485 00:11:44.485 Elapsed time = 0.890 seconds 00:11:44.743 11:24:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:44.743 11:24:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.743 11:24:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:44.743 11:24:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.744 11:24:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:11:44.744 11:24:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:11:44.744 11:24:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:44.744 11:24:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:11:44.744 11:24:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:44.744 11:24:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:11:44.744 11:24:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:44.744 11:24:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:44.744 rmmod nvme_tcp 00:11:44.744 rmmod nvme_fabrics 00:11:44.744 rmmod nvme_keyring 00:11:44.744 11:24:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:44.744 11:24:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:11:44.744 11:24:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 
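The nvmftestfini teardown running through these last entries mirrors the setup: unload the kernel NVMe/TCP modules, kill the nvmf_tgt process (pid 70438 in this run), strip only the iptables rules tagged SPDK_NVMF, and dismantle the veth/bridge/namespace topology. Stripped of the trace plumbing, the cleanup is roughly:

  modprobe -v -r nvme-tcp                               # rmmod nvme_tcp / nvme_fabrics / nvme_keyring above
  modprobe -v -r nvme-fabrics
  kill "$nvmfpid" && wait "$nvmfpid"                    # nvmfpid=70438 here
  iptables-save | grep -v SPDK_NVMF | iptables-restore  # drop only the rules added during setup
  ip link set nvmf_init_br nomaster && ip link set nvmf_init_br down
  ip link delete nvmf_br type bridge
  ip link delete nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
  # _remove_spdk_ns (traced just below) then drops the nvmf_tgt_ns_spdk namespace itself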
00:11:44.744 11:24:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 70438 ']' 00:11:44.744 11:24:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 70438 00:11:44.744 11:24:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 70438 ']' 00:11:44.744 11:24:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 70438 00:11:44.744 11:24:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:11:44.744 11:24:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:44.744 11:24:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70438 00:11:44.744 11:24:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:11:44.744 11:24:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:11:44.744 killing process with pid 70438 00:11:44.744 11:24:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70438' 00:11:44.744 11:24:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 70438 00:11:44.744 11:24:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 70438 00:11:45.003 11:24:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:45.003 11:24:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:45.003 11:24:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:45.003 11:24:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:11:45.003 11:24:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:11:45.003 11:24:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:45.003 11:24:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:11:45.003 11:24:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:45.003 11:24:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:11:45.003 11:24:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:11:45.003 11:24:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:11:45.003 11:24:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:11:45.003 11:24:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:11:45.003 11:24:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:11:45.003 11:24:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:11:45.003 11:24:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:11:45.003 11:24:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:11:45.003 11:24:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:11:45.003 11:24:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@242 -- # 
ip link delete nvmf_init_if 00:11:45.262 11:24:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:11:45.262 11:24:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:45.262 11:24:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:45.262 11:24:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@246 -- # remove_spdk_ns 00:11:45.262 11:24:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:45.262 11:24:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:45.262 11:24:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:45.262 11:24:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@300 -- # return 0 00:11:45.262 00:11:45.262 real 0m2.532s 00:11:45.262 user 0m7.792s 00:11:45.262 sys 0m0.741s 00:11:45.262 11:24:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:45.262 11:24:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:45.262 ************************************ 00:11:45.262 END TEST nvmf_bdevio 00:11:45.262 ************************************ 00:11:45.262 11:24:30 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:11:45.262 00:11:45.262 real 3m34.451s 00:11:45.263 user 11m14.118s 00:11:45.263 sys 0m59.846s 00:11:45.263 11:24:30 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:45.263 11:24:30 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:45.263 ************************************ 00:11:45.263 END TEST nvmf_target_core 00:11:45.263 ************************************ 00:11:45.263 11:24:30 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:11:45.263 11:24:30 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:45.263 11:24:30 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:45.263 11:24:30 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:45.263 ************************************ 00:11:45.263 START TEST nvmf_target_extra 00:11:45.263 ************************************ 00:11:45.263 11:24:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:11:45.522 * Looking for test storage... 
00:11:45.522 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:11:45.522 11:24:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:45.522 11:24:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # lcov --version 00:11:45.522 11:24:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:45.522 11:24:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:45.522 11:24:30 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:45.523 11:24:30 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:45.523 11:24:30 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:45.523 11:24:30 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:11:45.523 11:24:30 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:11:45.523 11:24:30 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:11:45.523 11:24:30 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:11:45.523 11:24:30 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:11:45.523 11:24:30 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:11:45.523 11:24:30 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:11:45.523 11:24:30 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:45.523 11:24:30 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 00:11:45.523 11:24:30 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:11:45.523 11:24:30 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:45.523 11:24:30 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:45.523 11:24:30 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:11:45.523 11:24:30 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:11:45.523 11:24:30 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:45.523 11:24:30 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:11:45.523 11:24:30 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:11:45.523 11:24:30 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:11:45.523 11:24:30 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:11:45.523 11:24:30 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:45.523 11:24:30 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:11:45.523 11:24:30 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:11:45.523 11:24:30 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:45.523 11:24:30 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:45.523 11:24:30 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:11:45.523 11:24:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:45.523 11:24:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:45.523 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:45.523 --rc genhtml_branch_coverage=1 00:11:45.523 --rc genhtml_function_coverage=1 00:11:45.523 --rc genhtml_legend=1 00:11:45.523 --rc geninfo_all_blocks=1 00:11:45.523 --rc geninfo_unexecuted_blocks=1 00:11:45.523 00:11:45.523 ' 00:11:45.523 11:24:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:45.523 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:45.523 --rc genhtml_branch_coverage=1 00:11:45.523 --rc genhtml_function_coverage=1 00:11:45.523 --rc genhtml_legend=1 00:11:45.523 --rc geninfo_all_blocks=1 00:11:45.523 --rc geninfo_unexecuted_blocks=1 00:11:45.523 00:11:45.523 ' 00:11:45.523 11:24:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:45.523 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:45.523 --rc genhtml_branch_coverage=1 00:11:45.523 --rc genhtml_function_coverage=1 00:11:45.523 --rc genhtml_legend=1 00:11:45.523 --rc geninfo_all_blocks=1 00:11:45.523 --rc geninfo_unexecuted_blocks=1 00:11:45.523 00:11:45.523 ' 00:11:45.523 11:24:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:45.523 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:45.523 --rc genhtml_branch_coverage=1 00:11:45.523 --rc genhtml_function_coverage=1 00:11:45.523 --rc genhtml_legend=1 00:11:45.523 --rc geninfo_all_blocks=1 00:11:45.523 --rc geninfo_unexecuted_blocks=1 00:11:45.523 00:11:45.523 ' 00:11:45.523 11:24:30 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:45.523 11:24:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:11:45.523 11:24:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:45.523 11:24:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:45.523 11:24:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:45.523 11:24:30 
nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:45.523 11:24:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:45.523 11:24:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:45.523 11:24:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:45.523 11:24:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:45.523 11:24:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:45.523 11:24:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:45.523 11:24:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a 00:11:45.523 11:24:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=806d442c-08ba-4719-b76d-33169283459a 00:11:45.523 11:24:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:45.523 11:24:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:45.523 11:24:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:45.523 11:24:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:45.523 11:24:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:45.523 11:24:30 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:11:45.523 11:24:30 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:45.523 11:24:30 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:45.523 11:24:30 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:45.523 11:24:30 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:45.523 11:24:30 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:45.523 11:24:30 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:45.523 11:24:30 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:11:45.523 11:24:30 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:45.523 11:24:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:11:45.523 11:24:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:45.523 11:24:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:45.523 11:24:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:45.523 11:24:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:45.523 11:24:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:45.523 11:24:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:45.523 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:45.523 11:24:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:45.523 11:24:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:45.523 11:24:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:45.523 11:24:31 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:11:45.523 11:24:31 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:11:45.523 11:24:31 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:11:45.523 11:24:31 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:11:45.523 11:24:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:45.523 11:24:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:45.523 11:24:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:45.523 ************************************ 00:11:45.523 START TEST nvmf_example 00:11:45.523 ************************************ 00:11:45.523 11:24:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:11:45.523 * Looking for test storage... 
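The "[: : integer expression expected" diagnostic emitted from test/nvmf/common.sh line 33 (and repeated each time that file is re-sourced later in this log) is the usual bash failure mode of handing an empty string to a numeric test: the trace shows the expansion '[' '' -eq 1 ']'. A minimal sketch of the failure and of a defensive spelling; the variable name below is hypothetical, not the one common.sh actually checks:

    flag=""                                    # unset/empty test knob
    [ "$flag" -eq 1 ] && echo "enabled"        # bash: [: : integer expression expected
    [ "${flag:-0}" -eq 1 ] && echo "enabled"   # empty value falls back to 0, test stays quiet

Because the broken test simply returns non-zero, execution continues, which is why the message shows up here as noise rather than stopping the run.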
00:11:45.523 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:45.523 11:24:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:45.783 11:24:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # lcov --version 00:11:45.783 11:24:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:45.783 11:24:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:45.783 11:24:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:45.783 11:24:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:45.783 11:24:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:45.783 11:24:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-: 00:11:45.783 11:24:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1 00:11:45.783 11:24:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-: 00:11:45.783 11:24:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2 00:11:45.783 11:24:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<' 00:11:45.783 11:24:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2 00:11:45.783 11:24:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1 00:11:45.783 11:24:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:45.783 11:24:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in 00:11:45.783 11:24:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1 00:11:45.783 11:24:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:45.783 11:24:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:45.783 11:24:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1 00:11:45.783 11:24:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1 00:11:45.784 11:24:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:45.784 11:24:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1 00:11:45.784 11:24:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1 00:11:45.784 11:24:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2 00:11:45.784 11:24:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2 00:11:45.784 11:24:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:45.784 11:24:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2 00:11:45.784 11:24:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # ver2[v]=2 00:11:45.784 11:24:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:45.784 11:24:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:45.784 11:24:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0 00:11:45.784 11:24:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:45.784 11:24:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:45.784 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:45.784 --rc genhtml_branch_coverage=1 00:11:45.784 --rc genhtml_function_coverage=1 00:11:45.784 --rc genhtml_legend=1 00:11:45.784 --rc geninfo_all_blocks=1 00:11:45.784 --rc geninfo_unexecuted_blocks=1 00:11:45.784 00:11:45.784 ' 00:11:45.784 11:24:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:45.784 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:45.784 --rc genhtml_branch_coverage=1 00:11:45.784 --rc genhtml_function_coverage=1 00:11:45.784 --rc genhtml_legend=1 00:11:45.784 --rc geninfo_all_blocks=1 00:11:45.784 --rc geninfo_unexecuted_blocks=1 00:11:45.784 00:11:45.784 ' 00:11:45.784 11:24:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:45.784 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:45.784 --rc genhtml_branch_coverage=1 00:11:45.784 --rc genhtml_function_coverage=1 00:11:45.784 --rc genhtml_legend=1 00:11:45.784 --rc geninfo_all_blocks=1 00:11:45.784 --rc geninfo_unexecuted_blocks=1 00:11:45.784 00:11:45.784 ' 00:11:45.784 11:24:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:45.784 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:45.784 --rc genhtml_branch_coverage=1 00:11:45.784 --rc genhtml_function_coverage=1 00:11:45.784 --rc genhtml_legend=1 00:11:45.784 --rc geninfo_all_blocks=1 00:11:45.784 --rc geninfo_unexecuted_blocks=1 00:11:45.784 00:11:45.784 ' 00:11:45.784 11:24:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:45.784 11:24:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:11:45.784 11:24:31 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:45.784 11:24:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:45.784 11:24:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:45.784 11:24:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:45.784 11:24:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:45.784 11:24:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:45.784 11:24:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:45.784 11:24:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:45.784 11:24:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:45.784 11:24:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:45.784 11:24:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a 00:11:45.784 11:24:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=806d442c-08ba-4719-b76d-33169283459a 00:11:45.784 11:24:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:45.784 11:24:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:45.784 11:24:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:45.784 11:24:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:45.784 11:24:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:45.784 11:24:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@15 -- # shopt -s extglob 00:11:45.784 11:24:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:45.784 11:24:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:45.784 11:24:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:45.784 11:24:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:45.784 11:24:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:45.784 11:24:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:45.784 11:24:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:11:45.784 11:24:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:45.784 11:24:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # : 0 00:11:45.784 11:24:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:45.784 11:24:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:45.784 11:24:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:45.784 11:24:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:45.784 11:24:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:45.784 11:24:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:45.784 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:45.784 11:24:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:45.784 11:24:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:45.784 11:24:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:45.784 11:24:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:11:45.784 11:24:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:11:45.784 11:24:31 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:11:45.784 11:24:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:11:45.784 11:24:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:11:45.784 11:24:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:11:45.784 11:24:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:11:45.784 11:24:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:11:45.784 11:24:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:45.784 11:24:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:45.784 11:24:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:11:45.784 11:24:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:45.784 11:24:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:45.784 11:24:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:45.784 11:24:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:45.784 11:24:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:45.784 11:24:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:45.784 11:24:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:45.784 11:24:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:45.784 11:24:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:11:45.784 11:24:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:11:45.784 11:24:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:11:45.784 11:24:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:11:45.785 11:24:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:11:45.785 11:24:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@460 -- # nvmf_veth_init 00:11:45.785 11:24:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:45.785 11:24:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:11:45.785 11:24:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:11:45.785 11:24:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:11:45.785 11:24:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:45.785 11:24:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:11:45.785 11:24:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:45.785 11:24:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@152 -- # 
NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:11:45.785 11:24:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:45.785 11:24:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:11:45.785 11:24:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:45.785 11:24:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:45.785 11:24:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:45.785 11:24:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:45.785 11:24:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:45.785 11:24:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:45.785 11:24:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:11:45.785 Cannot find device "nvmf_init_br" 00:11:45.785 11:24:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@162 -- # true 00:11:45.785 11:24:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:11:45.785 Cannot find device "nvmf_init_br2" 00:11:45.785 11:24:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@163 -- # true 00:11:45.785 11:24:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:11:45.785 Cannot find device "nvmf_tgt_br" 00:11:45.785 11:24:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@164 -- # true 00:11:45.785 11:24:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:11:45.785 Cannot find device "nvmf_tgt_br2" 00:11:45.785 11:24:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@165 -- # true 00:11:45.785 11:24:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:11:45.785 Cannot find device "nvmf_init_br" 00:11:45.785 11:24:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@166 -- # true 00:11:45.785 11:24:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:11:45.785 Cannot find device "nvmf_init_br2" 00:11:45.785 11:24:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@167 -- # true 00:11:45.785 11:24:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:11:45.785 Cannot find device "nvmf_tgt_br" 00:11:45.785 11:24:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@168 -- # true 00:11:45.785 11:24:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:11:45.785 Cannot find device "nvmf_tgt_br2" 00:11:45.785 11:24:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@169 -- # true 00:11:45.785 11:24:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:11:45.785 Cannot find device "nvmf_br" 00:11:45.785 11:24:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@170 -- # true 00:11:45.785 11:24:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:11:45.785 Cannot find 
device "nvmf_init_if" 00:11:45.785 11:24:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@171 -- # true 00:11:45.785 11:24:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:11:45.785 Cannot find device "nvmf_init_if2" 00:11:45.785 11:24:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@172 -- # true 00:11:45.785 11:24:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:45.785 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:45.785 11:24:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@173 -- # true 00:11:45.785 11:24:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:45.785 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:45.785 11:24:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@174 -- # true 00:11:45.785 11:24:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:11:45.785 11:24:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:45.785 11:24:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:11:46.043 11:24:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:46.043 11:24:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:46.043 11:24:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:46.043 11:24:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:46.043 11:24:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:46.043 11:24:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:11:46.043 11:24:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:11:46.043 11:24:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:11:46.043 11:24:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:11:46.043 11:24:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:11:46.043 11:24:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:11:46.043 11:24:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:11:46.044 11:24:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:11:46.044 11:24:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:11:46.044 11:24:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:46.044 11:24:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@203 -- # ip netns exec 
nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:46.044 11:24:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:46.044 11:24:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:11:46.044 11:24:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:11:46.044 11:24:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:11:46.044 11:24:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:11:46.044 11:24:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:46.044 11:24:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:46.044 11:24:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:46.044 11:24:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:11:46.044 11:24:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:11:46.044 11:24:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:11:46.044 11:24:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:46.044 11:24:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:11:46.044 11:24:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:11:46.044 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:46.044 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.112 ms 00:11:46.044 00:11:46.044 --- 10.0.0.3 ping statistics --- 00:11:46.044 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:46.044 rtt min/avg/max/mdev = 0.112/0.112/0.112/0.000 ms 00:11:46.044 11:24:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:11:46.044 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:11:46.044 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.040 ms 00:11:46.044 00:11:46.044 --- 10.0.0.4 ping statistics --- 00:11:46.044 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:46.044 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:11:46.044 11:24:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:46.044 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:46.044 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:11:46.044 00:11:46.044 --- 10.0.0.1 ping statistics --- 00:11:46.044 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:46.044 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:11:46.044 11:24:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:11:46.316 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:46.316 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.076 ms 00:11:46.316 00:11:46.316 --- 10.0.0.2 ping statistics --- 00:11:46.316 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:46.316 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:11:46.316 11:24:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:46.316 11:24:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@461 -- # return 0 00:11:46.316 11:24:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:46.316 11:24:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:46.316 11:24:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:46.316 11:24:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:46.316 11:24:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:46.316 11:24:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:46.316 11:24:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:46.316 11:24:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:11:46.316 11:24:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:11:46.316 11:24:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:46.316 11:24:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:46.316 11:24:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:11:46.316 11:24:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:11:46.316 11:24:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=70772 00:11:46.316 11:24:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:11:46.316 11:24:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:11:46.316 11:24:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 70772 00:11:46.316 11:24:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # '[' -z 70772 ']' 00:11:46.316 11:24:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:46.316 11:24:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:46.316 11:24:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:46.316 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:46.316 11:24:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:46.316 11:24:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:47.259 11:24:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:47.259 11:24:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@868 -- # return 0 00:11:47.259 11:24:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:11:47.259 11:24:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:47.259 11:24:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:47.259 11:24:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:47.259 11:24:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.259 11:24:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:47.564 11:24:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.564 11:24:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:11:47.564 11:24:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.564 11:24:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:47.564 11:24:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.564 11:24:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:11:47.564 11:24:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:47.564 11:24:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.564 11:24:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:47.564 11:24:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.564 11:24:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:11:47.564 11:24:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:47.564 11:24:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.564 11:24:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:47.564 11:24:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.564 11:24:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:11:47.564 11:24:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.564 11:24:32 
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:47.564 11:24:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.564 11:24:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:11:47.564 11:24:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:11:59.768 Initializing NVMe Controllers 00:11:59.768 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:11:59.768 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:11:59.768 Initialization complete. Launching workers. 00:11:59.768 ======================================================== 00:11:59.768 Latency(us) 00:11:59.768 Device Information : IOPS MiB/s Average min max 00:11:59.768 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 15035.10 58.73 4258.44 692.46 28150.88 00:11:59.768 ======================================================== 00:11:59.768 Total : 15035.10 58.73 4258.44 692.46 28150.88 00:11:59.768 00:11:59.768 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:11:59.768 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:11:59.768 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:59.768 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # sync 00:11:59.768 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:59.768 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set +e 00:11:59.768 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:59.768 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:59.768 rmmod nvme_tcp 00:11:59.768 rmmod nvme_fabrics 00:11:59.768 rmmod nvme_keyring 00:11:59.768 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:59.768 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@128 -- # set -e 00:11:59.768 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@129 -- # return 0 00:11:59.768 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@517 -- # '[' -n 70772 ']' 00:11:59.768 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@518 -- # killprocess 70772 00:11:59.768 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # '[' -z 70772 ']' 00:11:59.768 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # kill -0 70772 00:11:59.768 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # uname 00:11:59.768 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:59.768 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70772 00:11:59.768 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # process_name=nvmf 00:11:59.768 11:24:43 
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@964 -- # '[' nvmf = sudo ']' 00:11:59.768 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70772' 00:11:59.768 killing process with pid 70772 00:11:59.768 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@973 -- # kill 70772 00:11:59.768 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@978 -- # wait 70772 00:11:59.768 nvmf threads initialize successfully 00:11:59.768 bdev subsystem init successfully 00:11:59.768 created a nvmf target service 00:11:59.768 create targets's poll groups done 00:11:59.768 all subsystems of target started 00:11:59.768 nvmf target is running 00:11:59.768 all subsystems of target stopped 00:11:59.768 destroy targets's poll groups done 00:11:59.768 destroyed the nvmf target service 00:11:59.768 bdev subsystem finish successfully 00:11:59.768 nvmf threads destroy successfully 00:11:59.768 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:59.768 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:59.768 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:59.768 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # iptr 00:11:59.768 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-save 00:11:59.768 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:59.768 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-restore 00:11:59.768 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:59.768 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:11:59.768 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:11:59.768 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:11:59.768 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:11:59.768 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:11:59.768 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:11:59.768 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:11:59.768 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:11:59.768 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:11:59.768 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:11:59.768 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:11:59.768 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:11:59.768 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:59.768 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@245 -- # ip 
netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:59.768 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@246 -- # remove_spdk_ns 00:11:59.768 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:59.768 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:59.768 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:59.768 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@300 -- # return 0 00:11:59.768 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:11:59.768 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:59.768 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:59.768 00:11:59.768 real 0m12.726s 00:11:59.768 user 0m44.885s 00:11:59.768 sys 0m1.961s 00:11:59.768 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:59.768 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:59.768 ************************************ 00:11:59.768 END TEST nvmf_example 00:11:59.768 ************************************ 00:11:59.768 11:24:43 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem /home/vagrant/spdk_repo/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:11:59.768 11:24:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:59.768 11:24:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:59.768 11:24:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:59.768 ************************************ 00:11:59.768 START TEST nvmf_filesystem 00:11:59.768 ************************************ 00:11:59.768 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:11:59.768 * Looking for test storage... 
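Condensed sketch of what the nvmf_example test above executed end to end. The addresses, RPC method names and perf arguments are the ones visible in the trace; the backgrounding and cleanup are simplified (the harness really drives this through nvmftestinit/nvmfexamplestart/nvmftestfini and waitforlisten in test/nvmf/common.sh), and paths are written relative to the SPDK repo root:

    # 1. nvmf_veth_init topology: initiator veths 10.0.0.1/2 on the host side, target
    #    veths 10.0.0.3/4 inside the nvmf_tgt_ns_spdk namespace, all enslaved to the
    #    nvmf_br bridge, plus iptables ACCEPT rules for TCP dport 4420.
    # 2. Target bring-up and subsystem wiring:
    ip netns exec nvmf_tgt_ns_spdk ./build/examples/nvmf -i 0 -g 10000 -m 0xF &
    # (the harness waits for /var/tmp/spdk.sock before issuing RPCs)
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    ./scripts/rpc.py bdev_malloc_create 64 512            # -> Malloc0: 64 MiB of 512 B blocks
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    # 3. Load from the initiator side; the ~15k IOPS / 58.73 MiB/s line above is this run's result:
    ./build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
    # 4. Teardown: kill the target PID, strip the SPDK_NVMF iptables rules, then delete
    #    the bridge, the veth pairs and the network namespace (the block just above).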
00:11:59.768 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:59.768 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:59.768 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lcov --version 00:11:59.768 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:59.768 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:59.768 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:59.768 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:59.768 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:59.769 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:11:59.769 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:11:59.769 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:11:59.769 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:11:59.769 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:11:59.769 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:11:59.769 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:11:59.769 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:59.769 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:11:59.769 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:11:59.769 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:59.769 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:59.769 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:11:59.769 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:11:59.769 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:59.769 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:11:59.769 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:11:59.769 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:11:59.769 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:11:59.769 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:59.769 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:11:59.769 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:11:59.769 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:59.769 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:59.769 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:11:59.769 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:59.769 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:59.769 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:59.769 --rc genhtml_branch_coverage=1 00:11:59.769 --rc genhtml_function_coverage=1 00:11:59.769 --rc genhtml_legend=1 00:11:59.769 --rc geninfo_all_blocks=1 00:11:59.769 --rc geninfo_unexecuted_blocks=1 00:11:59.769 00:11:59.769 ' 00:11:59.769 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:59.769 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:59.769 --rc genhtml_branch_coverage=1 00:11:59.769 --rc genhtml_function_coverage=1 00:11:59.769 --rc genhtml_legend=1 00:11:59.769 --rc geninfo_all_blocks=1 00:11:59.769 --rc geninfo_unexecuted_blocks=1 00:11:59.769 00:11:59.769 ' 00:11:59.769 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:59.769 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:59.769 --rc genhtml_branch_coverage=1 00:11:59.769 --rc genhtml_function_coverage=1 00:11:59.769 --rc genhtml_legend=1 00:11:59.769 --rc geninfo_all_blocks=1 00:11:59.769 --rc geninfo_unexecuted_blocks=1 00:11:59.769 00:11:59.769 ' 00:11:59.769 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:59.769 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:59.769 --rc genhtml_branch_coverage=1 00:11:59.769 --rc genhtml_function_coverage=1 00:11:59.769 --rc genhtml_legend=1 00:11:59.769 --rc geninfo_all_blocks=1 00:11:59.769 --rc geninfo_unexecuted_blocks=1 00:11:59.769 00:11:59.769 ' 00:11:59.769 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:11:59.769 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:11:59.769 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:11:59.769 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:11:59.769 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:11:59.769 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:11:59.769 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:11:59.769 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:11:59.769 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:11:59.769 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:11:59.769 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:11:59.769 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:11:59.769 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:11:59.769 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=y 00:11:59.769 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:11:59.769 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:11:59.769 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:11:59.769 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:11:59.769 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:11:59.769 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:11:59.769 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:11:59.769 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:11:59.769 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:11:59.769 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:11:59.769 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:11:59.769 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:11:59.769 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:11:59.769 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:11:59.769 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:11:59.769 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:11:59.769 11:24:43 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:11:59.769 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_CET=n 00:11:59.769 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:11:59.769 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:11:59.769 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:11:59.769 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:11:59.769 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:11:59.769 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:11:59.769 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:11:59.769 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:11:59.769 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:11:59.769 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:11:59.769 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:11:59.769 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:11:59.769 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:11:59.769 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:11:59.769 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:11:59.769 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:11:59.769 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:11:59.769 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:11:59.769 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:11:59.769 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:11:59.769 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:11:59.769 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:11:59.769 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:11:59.769 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:11:59.769 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:11:59.769 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:11:59.769 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:11:59.769 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_RDMA=y 
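The lcov --version probe earlier in this test's preamble (awk '{print $NF}' followed by the lt 1.15 2 comparison) is what decides which --rc coverage switches end up in LCOV_OPTS. A standalone sketch of the same idea; ver_lt is an illustrative stand-in for the lt/cmp_versions helpers that actually live in scripts/common.sh:

    ver_lt() {                        # return 0 (true) when version $1 < version $2
        local IFS=.-: i
        local -a a=($1) b=($2)        # split on the same separators cmp_versions uses
        for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1                      # equal versions are not "less than"
    }

    lcov_version=$(lcov --version | awk '{print $NF}')    # "1.15" in this run
    if ver_lt "$lcov_version" 2; then
        lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
    fi
    export LCOV_OPTS=" $lcov_rc_opt "                     # plus the genhtml/geninfo flags shown in the export above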
00:11:59.769 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:11:59.769 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:11:59.769 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:11:59.769 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:11:59.769 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:11:59.769 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=n 00:11:59.769 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:11:59.769 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:11:59.769 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:11:59.769 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:11:59.769 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:11:59.769 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:11:59.769 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:11:59.769 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:11:59.769 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_GOLANG=y 00:11:59.769 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:11:59.769 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:11:59.769 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:11:59.769 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:11:59.769 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:11:59.769 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:11:59.769 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:11:59.769 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:11:59.769 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:11:59.769 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_FC=n 00:11:59.769 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_AVAHI=y 00:11:59.769 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:11:59.769 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:11:59.769 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:11:59.769 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # 
CONFIG_TESTS=y 00:11:59.769 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:11:59.769 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:11:59.769 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:11:59.769 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:11:59.769 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:11:59.769 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:11:59.769 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:11:59.769 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:11:59.769 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@90 -- # CONFIG_URING=n 00:11:59.769 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:11:59.769 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:11:59.769 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:11:59.769 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:11:59.769 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:11:59.769 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:11:59.769 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:11:59.769 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:11:59.769 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:11:59.769 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:11:59.769 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:11:59.769 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:11:59.769 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:11:59.769 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:11:59.769 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:11:59.769 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:11:59.769 #define SPDK_CONFIG_H 00:11:59.769 #define SPDK_CONFIG_AIO_FSDEV 1 00:11:59.769 #define SPDK_CONFIG_APPS 1 00:11:59.769 #define SPDK_CONFIG_ARCH 
native 00:11:59.769 #undef SPDK_CONFIG_ASAN 00:11:59.769 #define SPDK_CONFIG_AVAHI 1 00:11:59.769 #undef SPDK_CONFIG_CET 00:11:59.769 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:11:59.769 #define SPDK_CONFIG_COVERAGE 1 00:11:59.769 #define SPDK_CONFIG_CROSS_PREFIX 00:11:59.769 #undef SPDK_CONFIG_CRYPTO 00:11:59.769 #undef SPDK_CONFIG_CRYPTO_MLX5 00:11:59.769 #undef SPDK_CONFIG_CUSTOMOCF 00:11:59.769 #undef SPDK_CONFIG_DAOS 00:11:59.769 #define SPDK_CONFIG_DAOS_DIR 00:11:59.769 #define SPDK_CONFIG_DEBUG 1 00:11:59.769 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:11:59.769 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/spdk/dpdk/build 00:11:59.769 #define SPDK_CONFIG_DPDK_INC_DIR 00:11:59.769 #define SPDK_CONFIG_DPDK_LIB_DIR 00:11:59.769 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:11:59.769 #undef SPDK_CONFIG_DPDK_UADK 00:11:59.769 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:11:59.769 #define SPDK_CONFIG_EXAMPLES 1 00:11:59.769 #undef SPDK_CONFIG_FC 00:11:59.769 #define SPDK_CONFIG_FC_PATH 00:11:59.769 #define SPDK_CONFIG_FIO_PLUGIN 1 00:11:59.769 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:11:59.769 #define SPDK_CONFIG_FSDEV 1 00:11:59.769 #undef SPDK_CONFIG_FUSE 00:11:59.769 #undef SPDK_CONFIG_FUZZER 00:11:59.769 #define SPDK_CONFIG_FUZZER_LIB 00:11:59.769 #define SPDK_CONFIG_GOLANG 1 00:11:59.769 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:11:59.769 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:11:59.769 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:11:59.769 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:11:59.769 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:11:59.769 #undef SPDK_CONFIG_HAVE_LIBBSD 00:11:59.769 #undef SPDK_CONFIG_HAVE_LZ4 00:11:59.769 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:11:59.769 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:11:59.769 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:11:59.769 #define SPDK_CONFIG_IDXD 1 00:11:59.769 #define SPDK_CONFIG_IDXD_KERNEL 1 00:11:59.769 #undef SPDK_CONFIG_IPSEC_MB 00:11:59.769 #define SPDK_CONFIG_IPSEC_MB_DIR 00:11:59.769 #define SPDK_CONFIG_ISAL 1 00:11:59.769 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:11:59.769 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:11:59.769 #define SPDK_CONFIG_LIBDIR 00:11:59.769 #undef SPDK_CONFIG_LTO 00:11:59.769 #define SPDK_CONFIG_MAX_LCORES 128 00:11:59.769 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:11:59.769 #define SPDK_CONFIG_NVME_CUSE 1 00:11:59.769 #undef SPDK_CONFIG_OCF 00:11:59.769 #define SPDK_CONFIG_OCF_PATH 00:11:59.769 #define SPDK_CONFIG_OPENSSL_PATH 00:11:59.769 #undef SPDK_CONFIG_PGO_CAPTURE 00:11:59.769 #define SPDK_CONFIG_PGO_DIR 00:11:59.769 #undef SPDK_CONFIG_PGO_USE 00:11:59.769 #define SPDK_CONFIG_PREFIX /usr/local 00:11:59.769 #undef SPDK_CONFIG_RAID5F 00:11:59.769 #undef SPDK_CONFIG_RBD 00:11:59.769 #define SPDK_CONFIG_RDMA 1 00:11:59.769 #define SPDK_CONFIG_RDMA_PROV verbs 00:11:59.769 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:11:59.769 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:11:59.769 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:11:59.769 #define SPDK_CONFIG_SHARED 1 00:11:59.769 #undef SPDK_CONFIG_SMA 00:11:59.769 #define SPDK_CONFIG_TESTS 1 00:11:59.769 #undef SPDK_CONFIG_TSAN 00:11:59.769 #define SPDK_CONFIG_UBLK 1 00:11:59.769 #define SPDK_CONFIG_UBSAN 1 00:11:59.769 #undef SPDK_CONFIG_UNIT_TESTS 00:11:59.769 #undef SPDK_CONFIG_URING 00:11:59.769 #define SPDK_CONFIG_URING_PATH 00:11:59.769 #undef SPDK_CONFIG_URING_ZNS 00:11:59.769 #define SPDK_CONFIG_USDT 1 00:11:59.769 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:11:59.769 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:11:59.769 
#undef SPDK_CONFIG_VFIO_USER 00:11:59.769 #define SPDK_CONFIG_VFIO_USER_DIR 00:11:59.769 #define SPDK_CONFIG_VHOST 1 00:11:59.769 #define SPDK_CONFIG_VIRTIO 1 00:11:59.769 #undef SPDK_CONFIG_VTUNE 00:11:59.769 #define SPDK_CONFIG_VTUNE_DIR 00:11:59.770 #define SPDK_CONFIG_WERROR 1 00:11:59.770 #define SPDK_CONFIG_WPDK_DIR 00:11:59.770 #undef SPDK_CONFIG_XNVME 00:11:59.770 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:11:59.770 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:11:59.770 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:59.770 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:11:59.770 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:59.770 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:59.770 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:59.770 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:59.770 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:59.770 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:59.770 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:11:59.770 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:59.770 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:11:59.770 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:11:59.770 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:11:59.770 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:11:59.770 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:11:59.770 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:11:59.770 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:11:59.770 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:11:59.770 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/home/vagrant/spdk_repo/spdk/../output/power 00:11:59.770 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:11:59.770 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:11:59.770 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:11:59.770 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:11:59.770 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:11:59.770 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:11:59.770 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:11:59.770 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:11:59.770 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:11:59.770 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:11:59.770 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:11:59.770 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:11:59.770 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:11:59.770 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ QEMU != QEMU ]] 00:11:59.770 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! 
-d /home/vagrant/spdk_repo/spdk/../output/power ]] 00:11:59.770 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:11:59.770 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:11:59.770 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:11:59.770 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:11:59.770 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:11:59.770 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:11:59.770 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:11:59.770 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:11:59.770 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:11:59.770 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:11:59.770 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:11:59.770 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:11:59.770 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:11:59.770 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:11:59.770 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:11:59.770 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:11:59.770 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:11:59.770 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:11:59.770 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:11:59.770 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:11:59.770 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:11:59.770 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:11:59.770 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:11:59.770 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:11:59.770 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:11:59.770 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:11:59.770 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 0 00:11:59.770 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:11:59.770 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:11:59.770 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export 
SPDK_TEST_NVME_CUSE 00:11:59.770 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:11:59.770 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:11:59.770 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:11:59.770 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:11:59.770 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 0 00:11:59.770 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:11:59.770 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:11:59.770 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:11:59.770 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:11:59.770 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:11:59.770 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:11:59.770 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:11:59.770 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:11:59.770 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:11:59.770 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:11:59.770 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:11:59.770 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:11:59.770 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:11:59.770 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:11:59.770 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:11:59.770 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:11:59.770 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:11:59.770 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:11:59.770 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:11:59.770 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:11:59.770 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:11:59.770 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:11:59.770 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:11:59.770 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:11:59.770 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:11:59.770 
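Editor's note: the long run of ': 0' / 'export SPDK_TEST_*' pairs traced around this point is autotest_common.sh giving each test flag a default only when autorun-spdk.conf did not already set it, then exporting it for child scripts. A hedged sketch of that idiom; the defaults shown are illustrative, since the trace only exposes the already-expanded values such as ': 1' and ': tcp':

: "${SPDK_TEST_NVMF:=0}"
export SPDK_TEST_NVMF                  # 1 in this run, injected by autorun-spdk.conf
: "${SPDK_TEST_NVMF_TRANSPORT:=tcp}"
export SPDK_TEST_NVMF_TRANSPORT        # tcp in this run
: "${SPDK_RUN_UBSAN:=0}"
export SPDK_RUN_UBSAN                  # 1 in this run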
11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:11:59.770 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:11:59.770 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 0 00:11:59.770 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:11:59.770 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1 00:11:59.770 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:11:59.770 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 00:11:59.770 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:11:59.770 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:11:59.770 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:11:59.770 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:11:59.770 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:11:59.770 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:11:59.770 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:11:59.770 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:11:59.770 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:11:59.770 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:11:59.770 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:11:59.770 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 0 00:11:59.770 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:11:59.770 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : 00:11:59.770 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:11:59.770 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true 00:11:59.770 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:11:59.770 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:11:59.770 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:11:59.770 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 1 00:11:59.770 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:11:59.770 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:11:59.770 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:11:59.770 11:24:44 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:11:59.770 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:11:59.770 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:11:59.770 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:11:59.770 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : 00:11:59.770 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:11:59.770 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:11:59.770 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:11:59.770 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:11:59.770 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:11:59.770 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:11:59.770 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:11:59.770 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:11:59.770 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:11:59.770 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:11:59.770 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:11:59.770 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:11:59.770 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:11:59.770 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 00:11:59.770 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:11:59.770 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 1 00:11:59.770 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:11:59.770 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 1 00:11:59.770 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:11:59.770 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # : 0 00:11:59.770 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:11:59.770 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # : 0 00:11:59.770 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:11:59.770 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:11:59.770 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # 
SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:11:59.770 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:11:59.770 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:11:59.770 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:11:59.770 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:11:59.770 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:11:59.770 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:11:59.770 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:11:59.770 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:11:59.770 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:11:59.770 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # 
PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:11:59.771 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:11:59.771 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:11:59.771 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:11:59.771 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:11:59.771 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:11:59.771 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:11:59.771 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:11:59.771 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:11:59.771 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@206 -- # cat 00:11:59.771 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:11:59.771 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:11:59.771 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:11:59.771 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:11:59.771 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:11:59.771 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:11:59.771 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:11:59.771 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:11:59.771 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:11:59.771 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:11:59.771 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:11:59.771 11:24:44 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:11:59.771 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:11:59.771 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:11:59.771 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:11:59.771 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:11:59.771 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:11:59.771 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:11:59.771 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:11:59.771 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:11:59.771 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:11:59.771 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@269 -- # _LCOV= 00:11:59.771 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:11:59.771 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]] 00:11:59.771 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:11:59.771 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:11:59.771 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@275 -- # lcov_opt= 00:11:59.771 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:11:59.771 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # export valgrind= 00:11:59.771 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # valgrind= 00:11:59.771 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # uname -s 00:11:59.771 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:11:59.771 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:11:59.771 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:11:59.771 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:11:59.771 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@289 -- # MAKE=make 00:11:59.771 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@290 -- # MAKEFLAGS=-j10 00:11:59.771 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:11:59.771 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:11:59.771 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:11:59.771 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # TEST_MODE= 00:11:59.771 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@310 -- # for i in "$@" 00:11:59.771 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@311 -- # case "$i" in 00:11:59.771 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@316 -- # TEST_TRANSPORT=tcp 00:11:59.771 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # [[ -z 71046 ]] 00:11:59.771 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # kill -0 71046 00:11:59.771 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1678 -- # set_test_storage 2147483648 00:11:59.771 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:11:59.771 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:11:59.771 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local mount target_dir 00:11:59.771 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:11:59.771 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:11:59.771 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:11:59.771 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:11:59.771 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.yafVip 00:11:59.771 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:11:59.771 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:11:59.771 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:11:59.771 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@368 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/nvmf/target /tmp/spdk.yafVip/tests/target /tmp/spdk.yafVip 00:11:59.771 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:11:59.771 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:59.771 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # df -T 00:11:59.771 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:11:59.771 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda5 00:11:59.771 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 
-- # fss["$mount"]=btrfs 00:11:59.771 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=13981294592 00:11:59.771 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=20314062848 00:11:59.771 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=5587443712 00:11:59.771 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:59.771 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=devtmpfs 00:11:59.771 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:11:59.771 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=4194304 00:11:59.771 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=4194304 00:11:59.771 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:11:59.771 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:59.771 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:11:59.771 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:59.771 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=6256398336 00:11:59.771 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=6266429440 00:11:59.771 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=10031104 00:11:59.771 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:59.771 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:11:59.771 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:59.771 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=2486431744 00:11:59.771 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=2506571776 00:11:59.771 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=20140032 00:11:59.771 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:59.771 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda5 00:11:59.771 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=btrfs 00:11:59.771 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=13981294592 00:11:59.771 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=20314062848 00:11:59.771 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=5587443712 00:11:59.771 
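Editor's note: the repeated mounts[...]/fss[...]/avails[...] assignments here are set_test_storage() cataloguing every filesystem reported by df -T before choosing where to put the 2 GiB of test scratch space it was asked for. A hedged reconstruction of that scan; the array and variable names mirror the log, and the *1024 scaling is an assumption based on the byte-sized numbers in the trace:

set_test_storage() {    # sketch of the helper traced above, not the literal SPDK source
    local requested_size=$1
    local -A mounts fss sizes avails uses
    while read -r source fs size use avail _ mount; do
        mounts["$mount"]=$source                  # e.g. /dev/vda5
        fss["$mount"]=$fs                         # e.g. btrfs
        sizes["$mount"]=$((size * 1024))          # df -T prints 1K blocks; the log shows
        uses["$mount"]=$((use * 1024))            # byte counts, so a *1024 scaling is
        avails["$mount"]=$((avail * 1024))        # assumed here
    done < <(df -T | grep -v Filesystem)
}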
11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:59.771 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:11:59.771 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:59.771 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=6266290176 00:11:59.771 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=6266429440 00:11:59.771 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=139264 00:11:59.771 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:59.771 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda2 00:11:59.771 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=ext4 00:11:59.771 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=840085504 00:11:59.771 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=1012768768 00:11:59.771 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=103477248 00:11:59.771 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:59.771 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda3 00:11:59.771 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=vfat 00:11:59.771 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=91617280 00:11:59.771 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=104607744 00:11:59.771 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=12990464 00:11:59.771 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:59.771 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:11:59.771 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:59.771 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=1253273600 00:11:59.771 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=1253285888 00:11:59.771 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:11:59.771 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:59.771 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest_2/fedora39-libvirt/output 00:11:59.771 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 
-- # fss["$mount"]=fuse.sshfs 00:11:59.771 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=94894858240 00:11:59.771 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=105088212992 00:11:59.771 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=4807921664 00:11:59.771 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:59.771 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:11:59.771 * Looking for test storage... 00:11:59.771 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@381 -- # local target_space new_size 00:11:59.771 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:11:59.771 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:11:59.771 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # df /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:59.771 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # mount=/home 00:11:59.771 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@387 -- # target_space=13981294592 00:11:59.771 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:11:59.771 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:11:59.771 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ btrfs == tmpfs ]] 00:11:59.771 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ btrfs == ramfs ]] 00:11:59.771 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ /home == / ]] 00:11:59.771 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:59.771 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:59.771 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:59.771 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:59.771 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@402 -- # return 0 00:11:59.771 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # set -o errtrace 00:11:59.771 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # shopt -s extdebug 00:11:59.771 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:11:59.771 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1684 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:11:59.771 11:24:44 
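Editor's note: immediately above, the same helper maps each storage candidate ($testdir, a /tmp/spdk.XXXXXX fallback) to its backing mount and takes the first one with enough free space, publishing it as SPDK_TEST_STORAGE for the filesystem test. A hedged sketch continuing the set_test_storage() reconstruction; the tmpfs/ramfs special cases and the filesystem-grow path visible in the trace are omitted:

local target_space
for target_dir in "${storage_candidates[@]}"; do
    # $6 of plain `df` output is the mount point backing the candidate directory.
    mount=$(df "$target_dir" | awk '$1 !~ /Filesystem/{print $6}')
    target_space=${avails["$mount"]}
    if (( target_space >= requested_size )); then
        export SPDK_TEST_STORAGE=$target_dir
        printf '* Found test storage at %s\n' "$target_dir"
        return 0
    fi
done
return 1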
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1685 -- # true 00:11:59.771 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1687 -- # xtrace_fd 00:11:59.771 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:11:59.771 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:11:59.771 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:11:59.771 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:11:59.771 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:11:59.771 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:11:59.771 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:11:59.771 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:11:59.771 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:59.771 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lcov --version 00:11:59.771 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:59.771 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:59.771 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:59.771 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:59.771 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:59.772 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:11:59.772 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:11:59.772 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:11:59.772 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:11:59.772 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:11:59.772 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:11:59.772 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:11:59.772 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:59.772 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:11:59.772 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:11:59.772 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:59.772 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:59.772 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:11:59.772 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:11:59.772 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:59.772 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:11:59.772 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:11:59.772 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:11:59.772 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:11:59.772 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:59.772 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:11:59.772 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:11:59.772 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:59.772 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:59.772 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:11:59.772 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:59.772 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:59.772 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:59.772 --rc genhtml_branch_coverage=1 00:11:59.772 --rc genhtml_function_coverage=1 00:11:59.772 --rc genhtml_legend=1 00:11:59.772 --rc geninfo_all_blocks=1 00:11:59.772 --rc geninfo_unexecuted_blocks=1 00:11:59.772 00:11:59.772 ' 00:11:59.772 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:59.772 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:59.772 --rc genhtml_branch_coverage=1 00:11:59.772 --rc genhtml_function_coverage=1 00:11:59.772 --rc genhtml_legend=1 00:11:59.772 --rc geninfo_all_blocks=1 00:11:59.772 --rc geninfo_unexecuted_blocks=1 00:11:59.772 00:11:59.772 ' 00:11:59.772 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:59.772 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:59.772 --rc genhtml_branch_coverage=1 00:11:59.772 --rc genhtml_function_coverage=1 00:11:59.772 --rc genhtml_legend=1 00:11:59.772 --rc geninfo_all_blocks=1 00:11:59.772 --rc geninfo_unexecuted_blocks=1 00:11:59.772 00:11:59.772 ' 00:11:59.772 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:59.772 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:59.772 --rc genhtml_branch_coverage=1 00:11:59.772 --rc genhtml_function_coverage=1 00:11:59.772 --rc genhtml_legend=1 00:11:59.772 --rc geninfo_all_blocks=1 00:11:59.772 --rc geninfo_unexecuted_blocks=1 00:11:59.772 00:11:59.772 ' 00:11:59.772 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:59.772 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- 
# uname -s 00:11:59.772 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:59.772 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:59.772 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:59.772 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:59.772 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:59.772 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:59.772 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:59.772 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:59.772 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:59.772 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:59.772 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a 00:11:59.772 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=806d442c-08ba-4719-b76d-33169283459a 00:11:59.772 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:59.772 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:59.772 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:59.772 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:59.772 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:59.772 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:11:59.772 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:59.772 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:59.772 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:59.772 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:59.772 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:59.772 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:59.772 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:11:59.772 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:59.772 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # : 0 00:11:59.772 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:59.772 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:59.772 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:59.772 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:59.772 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:59.772 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:59.772 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:59.772 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:59.772 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:59.772 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:59.772 11:24:44 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:11:59.772 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:11:59.772 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:11:59.772 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:59.772 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:59.772 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:59.772 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:59.772 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:59.772 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:59.772 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:59.772 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:59.772 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:11:59.772 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:11:59.772 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:11:59.772 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:11:59.772 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:11:59.772 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@460 -- # nvmf_veth_init 00:11:59.772 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:59.772 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:11:59.772 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:11:59.772 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:11:59.772 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:59.772 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:11:59.772 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:59.772 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:11:59.772 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:59.772 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:11:59.772 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:59.772 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:59.772 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@157 
-- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:59.772 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:59.772 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:59.772 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:59.772 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:11:59.772 Cannot find device "nvmf_init_br" 00:11:59.772 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@162 -- # true 00:11:59.772 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:11:59.772 Cannot find device "nvmf_init_br2" 00:11:59.772 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@163 -- # true 00:11:59.772 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:11:59.772 Cannot find device "nvmf_tgt_br" 00:11:59.772 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@164 -- # true 00:11:59.772 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:11:59.772 Cannot find device "nvmf_tgt_br2" 00:11:59.772 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@165 -- # true 00:11:59.772 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:11:59.772 Cannot find device "nvmf_init_br" 00:11:59.772 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@166 -- # true 00:11:59.772 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:11:59.772 Cannot find device "nvmf_init_br2" 00:11:59.772 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@167 -- # true 00:11:59.772 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:11:59.772 Cannot find device "nvmf_tgt_br" 00:11:59.772 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@168 -- # true 00:11:59.772 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:11:59.772 Cannot find device "nvmf_tgt_br2" 00:11:59.772 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@169 -- # true 00:11:59.772 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:11:59.772 Cannot find device "nvmf_br" 00:11:59.772 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@170 -- # true 00:11:59.772 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:11:59.772 Cannot find device "nvmf_init_if" 00:11:59.772 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@171 -- # true 00:11:59.772 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:11:59.772 Cannot find device "nvmf_init_if2" 00:11:59.772 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@172 -- # true 00:11:59.772 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:59.772 Cannot open network namespace 
"nvmf_tgt_ns_spdk": No such file or directory 00:11:59.772 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@173 -- # true 00:11:59.772 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:59.772 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:59.772 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@174 -- # true 00:11:59.772 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:11:59.772 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:59.772 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:11:59.772 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:59.772 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:59.772 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:59.772 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:59.772 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:59.772 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:11:59.772 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:11:59.772 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:11:59.772 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:11:59.772 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:11:59.772 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:11:59.772 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:11:59.772 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:11:59.772 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:11:59.772 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:59.772 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:59.772 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:59.772 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:11:59.772 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:11:59.772 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:11:59.772 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:11:59.772 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:59.772 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:59.772 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:59.772 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:11:59.772 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:11:59.772 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:11:59.772 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:59.772 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:11:59.772 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:11:59.772 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:59.772 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.058 ms 00:11:59.772 00:11:59.772 --- 10.0.0.3 ping statistics --- 00:11:59.772 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:59.773 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:11:59.773 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:11:59.773 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:11:59.773 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.057 ms 00:11:59.773 00:11:59.773 --- 10.0.0.4 ping statistics --- 00:11:59.773 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:59.773 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:11:59.773 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:59.773 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:59.773 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:11:59.773 00:11:59.773 --- 10.0.0.1 ping statistics --- 00:11:59.773 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:59.773 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:11:59.773 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:11:59.773 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:59.773 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.060 ms 00:11:59.773 00:11:59.773 --- 10.0.0.2 ping statistics --- 00:11:59.773 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:59.773 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:11:59.773 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:59.773 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@461 -- # return 0 00:11:59.773 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:59.773 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:59.773 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:59.773 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:59.773 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:59.773 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:59.773 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:59.773 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:11:59.773 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:59.773 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:59.773 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:59.773 ************************************ 00:11:59.773 START TEST nvmf_filesystem_no_in_capsule 00:11:59.773 ************************************ 00:11:59.773 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 0 00:11:59.773 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:11:59.773 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:11:59.773 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:59.773 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:59.773 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:59.773 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:59.773 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=71234 00:11:59.773 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 71234 00:11:59.773 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 71234 ']' 00:11:59.773 11:24:44 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:59.773 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:59.773 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:59.773 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:59.773 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:59.773 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:59.773 [2024-11-20 11:24:44.791209] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 00:11:59.773 [2024-11-20 11:24:44.791304] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:59.773 [2024-11-20 11:24:44.950553] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:59.773 [2024-11-20 11:24:44.984373] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:59.773 [2024-11-20 11:24:44.984436] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:59.773 [2024-11-20 11:24:44.984447] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:59.773 [2024-11-20 11:24:44.984456] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:59.773 [2024-11-20 11:24:44.984463] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
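For readability, the target bring-up that the following trace performs (after the veth/bridge/namespace plumbing above) can be condensed to the sequence below. This is a minimal sketch, not the literal body of target/filesystem.sh: it assumes rpc_cmd maps to SPDK's scripts/rpc.py on the default /var/tmp/spdk.sock socket; paths, IPs, NQNs and serials are the ones echoed in this run.

    # start the target inside the test namespace (nvmfappstart in the trace)
    ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    # configure it over JSON-RPC (flags exactly as echoed below; -c 0 makes this the no_in_capsule variant)
    rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0
    rpc.py bdev_malloc_create 512 512 -b Malloc1                # 1048576 x 512 B blocks = 512 MiB
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    # host side: connect from the default namespace, then wait for the block device to appear
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 \
        --hostnqn=nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a \
        --hostid=806d442c-08ba-4719-b76d-33169283459a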
00:11:59.773 [2024-11-20 11:24:44.985292] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:59.773 [2024-11-20 11:24:44.985342] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:59.773 [2024-11-20 11:24:44.985495] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:59.773 [2024-11-20 11:24:44.985511] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:00.339 11:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:00.339 11:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:12:00.339 11:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:00.339 11:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:00.339 11:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:00.339 11:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:00.339 11:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:12:00.339 11:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:12:00.339 11:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.339 11:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:00.339 [2024-11-20 11:24:45.914199] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:00.339 11:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.339 11:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:12:00.339 11:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.339 11:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:00.597 Malloc1 00:12:00.597 11:24:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.597 11:24:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:00.597 11:24:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.597 11:24:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:00.597 11:24:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.597 11:24:46 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:00.597 11:24:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.597 11:24:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:00.597 11:24:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.597 11:24:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:12:00.597 11:24:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.598 11:24:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:00.598 [2024-11-20 11:24:46.026469] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:12:00.598 11:24:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.598 11:24:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:12:00.598 11:24:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:12:00.598 11:24:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:12:00.598 11:24:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:12:00.598 11:24:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:12:00.598 11:24:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:12:00.598 11:24:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.598 11:24:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:00.598 11:24:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.598 11:24:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:12:00.598 { 00:12:00.598 "aliases": [ 00:12:00.598 "1d4bf4de-1598-45ee-9e1e-3ea27c4e2eab" 00:12:00.598 ], 00:12:00.598 "assigned_rate_limits": { 00:12:00.598 "r_mbytes_per_sec": 0, 00:12:00.598 "rw_ios_per_sec": 0, 00:12:00.598 "rw_mbytes_per_sec": 0, 00:12:00.598 "w_mbytes_per_sec": 0 00:12:00.598 }, 00:12:00.598 "block_size": 512, 00:12:00.598 "claim_type": "exclusive_write", 00:12:00.598 "claimed": true, 00:12:00.598 "driver_specific": {}, 00:12:00.598 "memory_domains": [ 00:12:00.598 { 00:12:00.598 "dma_device_id": "system", 00:12:00.598 "dma_device_type": 1 00:12:00.598 }, 00:12:00.598 { 00:12:00.598 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:00.598 
"dma_device_type": 2 00:12:00.598 } 00:12:00.598 ], 00:12:00.598 "name": "Malloc1", 00:12:00.598 "num_blocks": 1048576, 00:12:00.598 "product_name": "Malloc disk", 00:12:00.598 "supported_io_types": { 00:12:00.598 "abort": true, 00:12:00.598 "compare": false, 00:12:00.598 "compare_and_write": false, 00:12:00.598 "copy": true, 00:12:00.598 "flush": true, 00:12:00.598 "get_zone_info": false, 00:12:00.598 "nvme_admin": false, 00:12:00.598 "nvme_io": false, 00:12:00.598 "nvme_io_md": false, 00:12:00.598 "nvme_iov_md": false, 00:12:00.598 "read": true, 00:12:00.598 "reset": true, 00:12:00.598 "seek_data": false, 00:12:00.598 "seek_hole": false, 00:12:00.598 "unmap": true, 00:12:00.598 "write": true, 00:12:00.598 "write_zeroes": true, 00:12:00.598 "zcopy": true, 00:12:00.598 "zone_append": false, 00:12:00.598 "zone_management": false 00:12:00.598 }, 00:12:00.598 "uuid": "1d4bf4de-1598-45ee-9e1e-3ea27c4e2eab", 00:12:00.598 "zoned": false 00:12:00.598 } 00:12:00.598 ]' 00:12:00.598 11:24:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:12:00.598 11:24:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:12:00.598 11:24:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:12:00.598 11:24:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:12:00.598 11:24:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:12:00.598 11:24:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:12:00.598 11:24:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:12:00.598 11:24:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a --hostid=806d442c-08ba-4719-b76d-33169283459a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:12:00.963 11:24:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:12:00.963 11:24:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:12:00.963 11:24:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:00.963 11:24:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:00.963 11:24:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:12:02.863 11:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:02.863 11:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:02.863 11:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # grep -c 
SPDKISFASTANDAWESOME 00:12:02.863 11:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:02.863 11:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:02.863 11:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:12:02.863 11:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:12:02.863 11:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:12:02.863 11:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:12:02.863 11:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:12:02.863 11:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:12:02.863 11:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:12:02.863 11:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:12:02.863 11:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:12:02.863 11:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:12:02.863 11:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:12:02.863 11:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:12:03.121 11:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:12:03.121 11:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:12:04.055 11:24:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:12:04.055 11:24:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:12:04.055 11:24:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:04.055 11:24:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:04.055 11:24:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:04.055 ************************************ 00:12:04.055 START TEST filesystem_ext4 00:12:04.055 ************************************ 00:12:04.055 11:24:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 
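The filesystem_ext4, filesystem_btrfs and filesystem_xfs sub-tests that follow all run nvmf_filesystem_create against the GPT partition created above. Condensed, each iteration amounts to the loop sketched here (a paraphrase of the traced make_filesystem/mount/unmount steps, assuming the /mnt/device mount point used by the test):

    dev=/dev/nvme0n1p1               # partition created with parted/partprobe above
    for fstype in ext4 btrfs xfs; do
        case "$fstype" in
            ext4) force=-F ;;        # mkfs.ext4 needs -F to overwrite an existing signature
            *)    force=-f ;;        # mkfs.btrfs and mkfs.xfs use -f
        esac
        "mkfs.$fstype" $force "$dev"
        mount "$dev" /mnt/device
        touch /mnt/device/aaa && sync   # prove the exported namespace accepts writes
        rm /mnt/device/aaa && sync
        umount /mnt/device
    done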
00:12:04.055 11:24:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:12:04.055 11:24:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:04.055 11:24:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:12:04.055 11:24:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:12:04.055 11:24:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:12:04.055 11:24:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:12:04.055 11:24:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@933 -- # local force 00:12:04.055 11:24:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:12:04.055 11:24:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:12:04.055 11:24:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:12:04.055 mke2fs 1.47.0 (5-Feb-2023) 00:12:04.055 Discarding device blocks: 0/522240 done 00:12:04.055 Creating filesystem with 522240 1k blocks and 130560 inodes 00:12:04.055 Filesystem UUID: f5f9285d-57e9-4dcc-84d4-de032adf4819 00:12:04.055 Superblock backups stored on blocks: 00:12:04.055 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:12:04.055 00:12:04.055 Allocating group tables: 0/64 done 00:12:04.055 Writing inode tables: 0/64 done 00:12:04.314 Creating journal (8192 blocks): done 00:12:04.314 Writing superblocks and filesystem accounting information: 0/64 done 00:12:04.314 00:12:04.314 11:24:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@949 -- # return 0 00:12:04.314 11:24:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:09.579 11:24:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:09.579 11:24:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:12:09.579 11:24:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:09.579 11:24:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:12:09.579 11:24:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:12:09.579 11:24:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:09.579 
11:24:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 71234 00:12:09.579 11:24:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:09.579 11:24:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:09.838 11:24:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:09.838 11:24:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:09.838 ************************************ 00:12:09.838 END TEST filesystem_ext4 00:12:09.838 ************************************ 00:12:09.838 00:12:09.838 real 0m5.629s 00:12:09.838 user 0m0.034s 00:12:09.838 sys 0m0.053s 00:12:09.838 11:24:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:09.838 11:24:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:12:09.838 11:24:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:12:09.838 11:24:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:09.838 11:24:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:09.838 11:24:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:09.838 ************************************ 00:12:09.838 START TEST filesystem_btrfs 00:12:09.838 ************************************ 00:12:09.839 11:24:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:12:09.839 11:24:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:12:09.839 11:24:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:09.839 11:24:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:12:09.839 11:24:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:12:09.839 11:24:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:12:09.839 11:24:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:12:09.839 11:24:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # local force 00:12:09.839 11:24:55 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:12:09.839 11:24:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:12:09.839 11:24:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:12:09.839 btrfs-progs v6.8.1 00:12:09.839 See https://btrfs.readthedocs.io for more information. 00:12:09.839 00:12:09.839 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:12:09.839 NOTE: several default settings have changed in version 5.15, please make sure 00:12:09.839 this does not affect your deployments: 00:12:09.839 - DUP for metadata (-m dup) 00:12:09.839 - enabled no-holes (-O no-holes) 00:12:09.839 - enabled free-space-tree (-R free-space-tree) 00:12:09.839 00:12:09.839 Label: (null) 00:12:09.839 UUID: 55b95b1c-1e21-4514-983a-8c55471b3a18 00:12:09.839 Node size: 16384 00:12:09.839 Sector size: 4096 (CPU page size: 4096) 00:12:09.839 Filesystem size: 510.00MiB 00:12:09.839 Block group profiles: 00:12:09.839 Data: single 8.00MiB 00:12:09.839 Metadata: DUP 32.00MiB 00:12:09.839 System: DUP 8.00MiB 00:12:09.839 SSD detected: yes 00:12:09.839 Zoned device: no 00:12:09.839 Features: extref, skinny-metadata, no-holes, free-space-tree 00:12:09.839 Checksum: crc32c 00:12:09.839 Number of devices: 1 00:12:09.839 Devices: 00:12:09.839 ID SIZE PATH 00:12:09.839 1 510.00MiB /dev/nvme0n1p1 00:12:09.839 00:12:09.839 11:24:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@949 -- # return 0 00:12:09.839 11:24:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:09.839 11:24:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:09.839 11:24:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:12:09.839 11:24:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:09.839 11:24:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:12:09.839 11:24:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:12:09.839 11:24:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:09.839 11:24:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 71234 00:12:09.839 11:24:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:09.839 11:24:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:09.839 11:24:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:09.839 
11:24:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:09.839 ************************************ 00:12:09.839 END TEST filesystem_btrfs 00:12:09.839 ************************************ 00:12:09.839 00:12:09.839 real 0m0.196s 00:12:09.839 user 0m0.017s 00:12:09.839 sys 0m0.065s 00:12:09.839 11:24:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:09.839 11:24:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:12:10.098 11:24:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:12:10.098 11:24:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:10.098 11:24:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:10.098 11:24:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:10.098 ************************************ 00:12:10.098 START TEST filesystem_xfs 00:12:10.098 ************************************ 00:12:10.098 11:24:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:12:10.098 11:24:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:12:10.098 11:24:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:10.098 11:24:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:12:10.098 11:24:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:12:10.098 11:24:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:12:10.098 11:24:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # local i=0 00:12:10.098 11:24:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # local force 00:12:10.098 11:24:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:12:10.098 11:24:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@938 -- # force=-f 00:12:10.098 11:24:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:12:10.098 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:12:10.098 = sectsz=512 attr=2, projid32bit=1 00:12:10.098 = crc=1 finobt=1, sparse=1, rmapbt=0 00:12:10.098 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:12:10.098 data 
= bsize=4096 blocks=130560, imaxpct=25 00:12:10.098 = sunit=0 swidth=0 blks 00:12:10.098 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:12:10.098 log =internal log bsize=4096 blocks=16384, version=2 00:12:10.098 = sectsz=512 sunit=0 blks, lazy-count=1 00:12:10.098 realtime =none extsz=4096 blocks=0, rtextents=0 00:12:10.666 Discarding blocks...Done. 00:12:10.666 11:24:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@949 -- # return 0 00:12:10.666 11:24:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:13.201 11:24:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:13.201 11:24:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:12:13.201 11:24:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:13.201 11:24:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:12:13.201 11:24:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:12:13.201 11:24:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:13.201 11:24:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 71234 00:12:13.201 11:24:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:13.201 11:24:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:13.201 11:24:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:13.201 11:24:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:13.201 ************************************ 00:12:13.201 END TEST filesystem_xfs 00:12:13.201 ************************************ 00:12:13.201 00:12:13.201 real 0m3.143s 00:12:13.201 user 0m0.020s 00:12:13.201 sys 0m0.054s 00:12:13.201 11:24:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:13.201 11:24:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:12:13.201 11:24:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:12:13.201 11:24:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:12:13.201 11:24:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:13.201 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:13.201 11:24:58 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:13.202 11:24:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # local i=0 00:12:13.202 11:24:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:13.202 11:24:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:13.202 11:24:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:13.202 11:24:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:13.202 11:24:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:12:13.202 11:24:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:13.202 11:24:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.202 11:24:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:13.202 11:24:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.202 11:24:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:12:13.202 11:24:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 71234 00:12:13.202 11:24:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 71234 ']' 00:12:13.202 11:24:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # kill -0 71234 00:12:13.202 11:24:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # uname 00:12:13.202 11:24:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:13.202 11:24:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71234 00:12:13.202 killing process with pid 71234 00:12:13.202 11:24:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:13.202 11:24:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:13.202 11:24:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71234' 00:12:13.202 11:24:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@973 -- # kill 71234 00:12:13.202 11:24:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@978 -- # wait 71234 00:12:13.460 11:24:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:12:13.460 00:12:13.460 real 0m14.310s 00:12:13.460 user 0m54.915s 00:12:13.460 sys 0m1.990s 00:12:13.461 11:24:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:13.461 11:24:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:13.461 ************************************ 00:12:13.461 END TEST nvmf_filesystem_no_in_capsule 00:12:13.461 ************************************ 00:12:13.719 11:24:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:12:13.719 11:24:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:13.719 11:24:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:13.719 11:24:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:12:13.719 ************************************ 00:12:13.719 START TEST nvmf_filesystem_in_capsule 00:12:13.719 ************************************ 00:12:13.719 11:24:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 4096 00:12:13.719 11:24:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:12:13.719 11:24:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:12:13.719 11:24:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:13.719 11:24:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:13.719 11:24:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:13.719 11:24:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=71602 00:12:13.719 11:24:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 71602 00:12:13.719 11:24:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 71602 ']' 00:12:13.719 11:24:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:13.719 11:24:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:13.719 11:24:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:13.719 11:24:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:13.719 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
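The nvmfappstart trace around this point shows how the in_capsule run brings the target up: the nvmf_tgt binary is launched inside the nvmf_tgt_ns_spdk network namespace and the harness then blocks until the RPC socket at /var/tmp/spdk.sock is listening. A minimal standalone sketch of that startup, using the command line from the trace; the polling loop is only an approximation of the real waitforlisten helper:

# Launch the SPDK NVMe-oF target inside the test network namespace (command line as traced).
ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &

# Rough stand-in for waitforlisten: poll until the RPC unix socket appears.
for _ in $(seq 1 100); do
    [ -S /var/tmp/spdk.sock ] && break
    sleep 0.1
done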
00:12:13.719 11:24:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:13.719 11:24:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:13.719 [2024-11-20 11:24:59.141743] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 00:12:13.719 [2024-11-20 11:24:59.141831] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:13.719 [2024-11-20 11:24:59.293639] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:13.978 [2024-11-20 11:24:59.351430] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:13.978 [2024-11-20 11:24:59.351514] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:13.978 [2024-11-20 11:24:59.351538] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:13.978 [2024-11-20 11:24:59.351553] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:13.978 [2024-11-20 11:24:59.351568] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:13.978 [2024-11-20 11:24:59.352726] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:13.978 [2024-11-20 11:24:59.352872] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:13.978 [2024-11-20 11:24:59.353033] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:13.978 [2024-11-20 11:24:59.353047] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:13.978 11:24:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:13.978 11:24:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:12:13.978 11:24:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:13.978 11:24:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:13.978 11:24:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:13.978 11:24:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:13.978 11:24:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:12:13.978 11:24:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:12:13.978 11:24:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.978 11:24:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:13.978 [2024-11-20 11:24:59.494762] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:13.978 11:24:59 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.978 11:24:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:12:13.978 11:24:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.978 11:24:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:14.236 Malloc1 00:12:14.236 11:24:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.236 11:24:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:14.236 11:24:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.236 11:24:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:14.236 11:24:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.236 11:24:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:14.236 11:24:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.236 11:24:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:14.236 11:24:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.236 11:24:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:12:14.236 11:24:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.236 11:24:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:14.236 [2024-11-20 11:24:59.619894] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:12:14.236 11:24:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.236 11:24:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:12:14.236 11:24:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:12:14.236 11:24:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:12:14.236 11:24:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:12:14.236 11:24:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:12:14.236 11:24:59 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:12:14.236 11:24:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.236 11:24:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:14.236 11:24:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.236 11:24:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:12:14.236 { 00:12:14.236 "aliases": [ 00:12:14.236 "a4e3139a-62ae-4369-a774-c74680f20686" 00:12:14.236 ], 00:12:14.236 "assigned_rate_limits": { 00:12:14.236 "r_mbytes_per_sec": 0, 00:12:14.236 "rw_ios_per_sec": 0, 00:12:14.236 "rw_mbytes_per_sec": 0, 00:12:14.236 "w_mbytes_per_sec": 0 00:12:14.236 }, 00:12:14.236 "block_size": 512, 00:12:14.236 "claim_type": "exclusive_write", 00:12:14.236 "claimed": true, 00:12:14.236 "driver_specific": {}, 00:12:14.236 "memory_domains": [ 00:12:14.236 { 00:12:14.236 "dma_device_id": "system", 00:12:14.236 "dma_device_type": 1 00:12:14.236 }, 00:12:14.236 { 00:12:14.236 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:14.236 "dma_device_type": 2 00:12:14.236 } 00:12:14.236 ], 00:12:14.236 "name": "Malloc1", 00:12:14.236 "num_blocks": 1048576, 00:12:14.236 "product_name": "Malloc disk", 00:12:14.236 "supported_io_types": { 00:12:14.236 "abort": true, 00:12:14.236 "compare": false, 00:12:14.236 "compare_and_write": false, 00:12:14.236 "copy": true, 00:12:14.236 "flush": true, 00:12:14.236 "get_zone_info": false, 00:12:14.236 "nvme_admin": false, 00:12:14.236 "nvme_io": false, 00:12:14.236 "nvme_io_md": false, 00:12:14.236 "nvme_iov_md": false, 00:12:14.236 "read": true, 00:12:14.236 "reset": true, 00:12:14.236 "seek_data": false, 00:12:14.236 "seek_hole": false, 00:12:14.236 "unmap": true, 00:12:14.236 "write": true, 00:12:14.236 "write_zeroes": true, 00:12:14.236 "zcopy": true, 00:12:14.237 "zone_append": false, 00:12:14.237 "zone_management": false 00:12:14.237 }, 00:12:14.237 "uuid": "a4e3139a-62ae-4369-a774-c74680f20686", 00:12:14.237 "zoned": false 00:12:14.237 } 00:12:14.237 ]' 00:12:14.237 11:24:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:12:14.237 11:24:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:12:14.237 11:24:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:12:14.237 11:24:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:12:14.237 11:24:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:12:14.237 11:24:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:12:14.237 11:24:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:12:14.237 11:24:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a --hostid=806d442c-08ba-4719-b76d-33169283459a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:12:14.495 11:24:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:12:14.495 11:24:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:12:14.495 11:24:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:14.495 11:24:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:14.495 11:24:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:12:16.393 11:25:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:16.393 11:25:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:16.393 11:25:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:16.393 11:25:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:16.393 11:25:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:16.393 11:25:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:12:16.394 11:25:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:12:16.394 11:25:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:12:16.653 11:25:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:12:16.653 11:25:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:12:16.653 11:25:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:12:16.653 11:25:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:12:16.653 11:25:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:12:16.653 11:25:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:12:16.653 11:25:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:12:16.653 11:25:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:12:16.653 11:25:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:12:16.653 11:25:02 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:12:16.653 11:25:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:12:17.588 11:25:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:12:17.588 11:25:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:12:17.588 11:25:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:17.588 11:25:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:17.588 11:25:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:17.588 ************************************ 00:12:17.588 START TEST filesystem_in_capsule_ext4 00:12:17.588 ************************************ 00:12:17.588 11:25:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 00:12:17.588 11:25:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:12:17.588 11:25:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:17.588 11:25:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:12:17.588 11:25:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:12:17.588 11:25:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:12:17.588 11:25:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:12:17.588 11:25:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@933 -- # local force 00:12:17.588 11:25:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:12:17.588 11:25:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:12:17.588 11:25:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:12:17.588 mke2fs 1.47.0 (5-Feb-2023) 00:12:17.588 Discarding device blocks: 0/522240 done 00:12:17.588 Creating filesystem with 522240 1k blocks and 130560 inodes 00:12:17.588 Filesystem UUID: 6589bf82-28ac-4b51-bdc2-cbd2989d6a5c 00:12:17.588 Superblock backups stored on blocks: 00:12:17.588 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:12:17.588 00:12:17.588 Allocating group tables: 0/64 done 00:12:17.588 Writing inode tables: 
0/64 done 00:12:17.847 Creating journal (8192 blocks): done 00:12:17.847 Writing superblocks and filesystem accounting information: 0/64 done 00:12:17.847 00:12:17.847 11:25:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@949 -- # return 0 00:12:17.847 11:25:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:23.116 11:25:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:23.116 11:25:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:12:23.116 11:25:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:23.116 11:25:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:12:23.116 11:25:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:12:23.116 11:25:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:23.116 11:25:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 71602 00:12:23.116 11:25:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:23.116 11:25:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:23.116 11:25:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:23.116 11:25:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:23.116 ************************************ 00:12:23.116 END TEST filesystem_in_capsule_ext4 00:12:23.116 ************************************ 00:12:23.116 00:12:23.116 real 0m5.505s 00:12:23.116 user 0m0.026s 00:12:23.116 sys 0m0.060s 00:12:23.116 11:25:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:23.116 11:25:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:12:23.116 11:25:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:12:23.116 11:25:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:23.116 11:25:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:23.116 11:25:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:23.116 
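The ext4 pass that just finished above, like the btrfs and xfs passes around it, drives the same create/verify loop from target/filesystem.sh: build a filesystem on the partition exported over NVMe/TCP, mount it, write and delete a file, unmount, and confirm the namespace and its partition are still visible. A minimal sketch of that loop with the device and mount point taken from the trace (the script additionally re-checks that the target pid is still alive after the unmount):

dev=/dev/nvme0n1p1

mkfs.ext4 -F "$dev"        # the btrfs and xfs passes swap in mkfs.btrfs -f / mkfs.xfs -f

mount "$dev" /mnt/device
touch /mnt/device/aaa
sync
rm /mnt/device/aaa
sync
umount /mnt/device

# The exported namespace and its partition must still be listed after the I/O.
lsblk -l -o NAME | grep -q -w nvme0n1
lsblk -l -o NAME | grep -q -w nvme0n1p1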
************************************ 00:12:23.116 START TEST filesystem_in_capsule_btrfs 00:12:23.116 ************************************ 00:12:23.116 11:25:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:12:23.116 11:25:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:12:23.116 11:25:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:23.116 11:25:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:12:23.116 11:25:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:12:23.116 11:25:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:12:23.116 11:25:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:12:23.116 11:25:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # local force 00:12:23.116 11:25:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:12:23.116 11:25:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:12:23.116 11:25:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:12:23.376 btrfs-progs v6.8.1 00:12:23.376 See https://btrfs.readthedocs.io for more information. 00:12:23.376 00:12:23.376 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:12:23.376 NOTE: several default settings have changed in version 5.15, please make sure 00:12:23.376 this does not affect your deployments: 00:12:23.376 - DUP for metadata (-m dup) 00:12:23.376 - enabled no-holes (-O no-holes) 00:12:23.376 - enabled free-space-tree (-R free-space-tree) 00:12:23.376 00:12:23.376 Label: (null) 00:12:23.376 UUID: f463b926-6df3-41bf-8e57-7ac3178291c2 00:12:23.376 Node size: 16384 00:12:23.376 Sector size: 4096 (CPU page size: 4096) 00:12:23.376 Filesystem size: 510.00MiB 00:12:23.376 Block group profiles: 00:12:23.376 Data: single 8.00MiB 00:12:23.376 Metadata: DUP 32.00MiB 00:12:23.376 System: DUP 8.00MiB 00:12:23.376 SSD detected: yes 00:12:23.376 Zoned device: no 00:12:23.376 Features: extref, skinny-metadata, no-holes, free-space-tree 00:12:23.376 Checksum: crc32c 00:12:23.376 Number of devices: 1 00:12:23.376 Devices: 00:12:23.376 ID SIZE PATH 00:12:23.376 1 510.00MiB /dev/nvme0n1p1 00:12:23.376 00:12:23.376 11:25:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@949 -- # return 0 00:12:23.376 11:25:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:23.376 11:25:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:23.376 11:25:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:12:23.376 11:25:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:23.376 11:25:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:12:23.376 11:25:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:12:23.376 11:25:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:23.376 11:25:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 71602 00:12:23.376 11:25:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:23.376 11:25:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:23.376 11:25:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:23.376 11:25:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:23.376 ************************************ 00:12:23.376 END TEST filesystem_in_capsule_btrfs 00:12:23.376 ************************************ 00:12:23.376 00:12:23.376 real 0m0.173s 00:12:23.376 user 0m0.016s 00:12:23.376 sys 0m0.059s 00:12:23.376 11:25:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 
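The preamble of each pass also traces make_filesystem from common/autotest_common.sh picking the right force flag before running mkfs: ext4 takes -F while btrfs and xfs take -f. A short paraphrase of that helper as it appears in the trace; the i counter visible there suggests the real helper retries on failure, which this sketch omits:

# Paraphrase of the traced make_filesystem helper (single attempt only).
make_filesystem() {
    local fstype=$1
    local dev_name=$2
    local force

    if [ "$fstype" = ext4 ]; then
        force=-F
    else
        force=-f
    fi

    mkfs."$fstype" $force "$dev_name"
}

make_filesystem btrfs /dev/nvme0n1p1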
00:12:23.376 11:25:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:12:23.376 11:25:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:12:23.376 11:25:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:23.376 11:25:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:23.376 11:25:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:23.376 ************************************ 00:12:23.376 START TEST filesystem_in_capsule_xfs 00:12:23.376 ************************************ 00:12:23.376 11:25:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:12:23.376 11:25:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:12:23.376 11:25:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:23.376 11:25:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:12:23.376 11:25:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:12:23.376 11:25:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:12:23.376 11:25:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # local i=0 00:12:23.376 11:25:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # local force 00:12:23.376 11:25:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:12:23.376 11:25:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@938 -- # force=-f 00:12:23.376 11:25:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:12:23.376 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:12:23.376 = sectsz=512 attr=2, projid32bit=1 00:12:23.376 = crc=1 finobt=1, sparse=1, rmapbt=0 00:12:23.376 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:12:23.376 data = bsize=4096 blocks=130560, imaxpct=25 00:12:23.376 = sunit=0 swidth=0 blks 00:12:23.376 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:12:23.376 log =internal log bsize=4096 blocks=16384, version=2 00:12:23.376 = sectsz=512 sunit=0 blks, lazy-count=1 00:12:23.376 realtime =none extsz=4096 blocks=0, rtextents=0 00:12:24.312 Discarding blocks...Done. 
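For context, the partition this xfs pass just formatted sits on the namespace provisioned near the top of the in_capsule test: a TCP transport with a 4096-byte in-capsule data size, a 512 MiB Malloc1 bdev attached to subsystem nqn.2016-06.io.spdk:cnode1, a listener on 10.0.0.3:4420, then an nvme connect and a single GPT partition on the host side. A condensed sketch of that setup issuing the same RPCs directly through scripts/rpc.py; the rpc.py path is assumed from the repo location shown in the trace, and the harness itself goes through its rpc_cmd wrapper instead:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# RPC arguments copied from the traced rpc_cmd calls.
$rpc nvmf_create_transport -t tcp -o -u 8192 -c 4096
$rpc bdev_malloc_create 512 512 -b Malloc1
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420

# Host side: connect to the subsystem, then carve one partition for the filesystem tests.
nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a \
    --hostid=806d442c-08ba-4719-b76d-33169283459a \
    -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420
parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100%
partprobe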
00:12:24.312 11:25:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@949 -- # return 0 00:12:24.312 11:25:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:26.215 11:25:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:26.215 11:25:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:12:26.215 11:25:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:26.215 11:25:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:12:26.215 11:25:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:12:26.215 11:25:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:26.215 11:25:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 71602 00:12:26.215 11:25:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:26.215 11:25:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:26.215 11:25:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:26.215 11:25:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:26.215 ************************************ 00:12:26.215 END TEST filesystem_in_capsule_xfs 00:12:26.215 ************************************ 00:12:26.215 00:12:26.215 real 0m2.503s 00:12:26.215 user 0m0.017s 00:12:26.215 sys 0m0.047s 00:12:26.215 11:25:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:26.215 11:25:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:12:26.215 11:25:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:12:26.215 11:25:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:12:26.215 11:25:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:26.215 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:26.215 11:25:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:26.215 11:25:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
common/autotest_common.sh@1223 -- # local i=0 00:12:26.215 11:25:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:26.215 11:25:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:26.215 11:25:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:26.215 11:25:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:26.215 11:25:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:12:26.215 11:25:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:26.215 11:25:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.215 11:25:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:26.215 11:25:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.215 11:25:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:12:26.215 11:25:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 71602 00:12:26.215 11:25:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 71602 ']' 00:12:26.215 11:25:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # kill -0 71602 00:12:26.215 11:25:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # uname 00:12:26.215 11:25:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:26.215 11:25:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71602 00:12:26.215 11:25:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:26.215 11:25:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:26.215 killing process with pid 71602 00:12:26.215 11:25:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71602' 00:12:26.216 11:25:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@973 -- # kill 71602 00:12:26.216 11:25:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@978 -- # wait 71602 00:12:26.216 11:25:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:12:26.216 00:12:26.216 real 0m12.719s 00:12:26.216 user 0m48.354s 00:12:26.216 sys 0m1.875s 00:12:26.216 11:25:11 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:26.216 11:25:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:26.216 ************************************ 00:12:26.216 END TEST nvmf_filesystem_in_capsule 00:12:26.216 ************************************ 00:12:26.474 11:25:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:12:26.474 11:25:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:26.474 11:25:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # sync 00:12:26.474 11:25:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:26.474 11:25:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set +e 00:12:26.474 11:25:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:26.474 11:25:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:26.474 rmmod nvme_tcp 00:12:26.474 rmmod nvme_fabrics 00:12:26.474 rmmod nvme_keyring 00:12:26.474 11:25:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:26.474 11:25:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@128 -- # set -e 00:12:26.474 11:25:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@129 -- # return 0 00:12:26.474 11:25:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:12:26.474 11:25:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:26.474 11:25:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:26.474 11:25:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:26.474 11:25:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # iptr 00:12:26.474 11:25:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-save 00:12:26.474 11:25:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-restore 00:12:26.474 11:25:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:26.474 11:25:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:26.474 11:25:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:12:26.474 11:25:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:12:26.474 11:25:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:12:26.474 11:25:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:12:26.474 11:25:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:12:26.474 11:25:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:12:26.474 11:25:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:12:26.474 11:25:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 
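The nvmftestfini entries here and just below unwind the harness setup: the kernel NVMe/TCP initiator modules are removed, the SPDK_NVMF iptables rules are filtered back out, and the veth/bridge topology plus the interfaces inside the target namespace are deleted. A condensed sketch of that teardown with the link and namespace names taken from the trace; the final namespace removal stands in for remove_spdk_ns, whose body is not shown in this log:

# Unload the kernel NVMe/TCP initiator stack.
modprobe -v -r nvme-tcp
modprobe -v -r nvme-fabrics

# Restore the firewall without the SPDK_NVMF rules added during setup.
iptables-save | grep -v SPDK_NVMF | iptables-restore

# Tear down the veth/bridge topology created for the virtual target.
for link in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$link" nomaster
    ip link set "$link" down
done
ip link delete nvmf_br type bridge
ip link delete nvmf_init_if
ip link delete nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2

# Assumed equivalent of remove_spdk_ns: drop the target namespace itself.
ip netns delete nvmf_tgt_ns_spdk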
00:12:26.474 11:25:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:12:26.474 11:25:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:12:26.474 11:25:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:12:26.474 11:25:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:12:26.732 11:25:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:26.732 11:25:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:26.732 11:25:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@246 -- # remove_spdk_ns 00:12:26.732 11:25:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:26.732 11:25:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:26.732 11:25:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:26.732 11:25:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@300 -- # return 0 00:12:26.732 00:12:26.732 real 0m28.373s 00:12:26.732 user 1m43.724s 00:12:26.732 sys 0m4.385s 00:12:26.732 11:25:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:26.732 11:25:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:12:26.732 ************************************ 00:12:26.732 END TEST nvmf_filesystem 00:12:26.732 ************************************ 00:12:26.732 11:25:12 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:12:26.732 11:25:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:26.732 11:25:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:26.732 11:25:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:26.732 ************************************ 00:12:26.732 START TEST nvmf_target_discovery 00:12:26.732 ************************************ 00:12:26.732 11:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:12:26.732 * Looking for test storage... 
00:12:26.733 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:26.733 11:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:26.733 11:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # lcov --version 00:12:26.733 11:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:26.992 11:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:26.992 11:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:26.992 11:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:26.992 11:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:26.992 11:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:12:26.992 11:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:12:26.992 11:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:12:26.992 11:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:12:26.992 11:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:12:26.992 11:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:12:26.992 11:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:12:26.992 11:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:26.992 11:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@344 -- # case "$op" in 00:12:26.992 11:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1 00:12:26.992 11:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:26.992 11:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:26.992 11:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1 00:12:26.992 11:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1 00:12:26.992 11:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:26.992 11:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1 00:12:26.992 11:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:12:26.992 11:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2 00:12:26.992 11:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2 00:12:26.992 11:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:26.992 11:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2 00:12:26.992 11:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:12:26.992 11:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:26.992 11:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:26.992 11:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0 00:12:26.992 11:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:26.992 11:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:26.992 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:26.992 --rc genhtml_branch_coverage=1 00:12:26.992 --rc genhtml_function_coverage=1 00:12:26.992 --rc genhtml_legend=1 00:12:26.992 --rc geninfo_all_blocks=1 00:12:26.992 --rc geninfo_unexecuted_blocks=1 00:12:26.992 00:12:26.992 ' 00:12:26.992 11:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:26.992 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:26.992 --rc genhtml_branch_coverage=1 00:12:26.992 --rc genhtml_function_coverage=1 00:12:26.992 --rc genhtml_legend=1 00:12:26.992 --rc geninfo_all_blocks=1 00:12:26.992 --rc geninfo_unexecuted_blocks=1 00:12:26.992 00:12:26.992 ' 00:12:26.992 11:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:26.992 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:26.992 --rc genhtml_branch_coverage=1 00:12:26.992 --rc genhtml_function_coverage=1 00:12:26.992 --rc genhtml_legend=1 00:12:26.992 --rc geninfo_all_blocks=1 00:12:26.992 --rc geninfo_unexecuted_blocks=1 00:12:26.992 00:12:26.992 ' 00:12:26.992 11:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:26.992 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:26.992 --rc genhtml_branch_coverage=1 00:12:26.992 --rc genhtml_function_coverage=1 00:12:26.992 --rc genhtml_legend=1 00:12:26.992 --rc geninfo_all_blocks=1 00:12:26.992 --rc geninfo_unexecuted_blocks=1 00:12:26.992 00:12:26.992 ' 00:12:26.992 11:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source 
/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:26.992 11:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:12:26.992 11:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:26.992 11:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:26.992 11:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:26.992 11:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:26.992 11:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:26.992 11:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:26.992 11:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:26.992 11:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:26.992 11:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:26.992 11:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:26.992 11:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a 00:12:26.992 11:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=806d442c-08ba-4719-b76d-33169283459a 00:12:26.992 11:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:26.992 11:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:26.992 11:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:26.992 11:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:26.992 11:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:26.992 11:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:12:26.992 11:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:26.992 11:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:26.992 11:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:26.993 11:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:26.993 11:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:26.993 11:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:26.993 11:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:12:26.993 11:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:26.993 11:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # : 0 00:12:26.993 11:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:26.993 11:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:26.993 11:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:26.993 11:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:26.993 11:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:26.993 11:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:26.993 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:26.993 11:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:26.993 11:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:26.993 11:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:26.993 11:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:12:26.993 11:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:12:26.993 11:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:12:26.993 11:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:12:26.993 11:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:12:26.993 11:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:26.993 11:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:26.993 11:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:26.993 11:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:26.993 11:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:26.993 11:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:26.993 11:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:26.993 11:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:26.993 11:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:12:26.993 11:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:12:26.993 11:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:12:26.993 11:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:12:26.993 11:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:12:26.993 11:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@460 -- # nvmf_veth_init 00:12:26.993 11:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:26.993 11:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:12:26.993 11:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:12:26.993 11:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:12:26.993 11:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:26.993 11:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 
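The nvmf/common.sh block above fixes the fixture's naming: NVMe/TCP ports 4420-4422 (with 4430 reserved by discovery.sh for the referral added later), initiator addresses 10.0.0.1/10.0.0.2 in the root namespace, target addresses 10.0.0.3/10.0.0.4 that will live inside the target namespace set up below, and a per-run host identity generated with nvme gen-hostnqn. A minimal reproduction of that host-identity step (the ##*: extraction is an assumption about how common.sh derives the host ID; the resulting values match the pattern in the trace):

    # Per-run NVMe host identity used by every nvme discover call in this test.
    NVME_HOSTNQN=$(nvme gen-hostnqn)   # emits e.g. nqn.2014-08.org.nvmexpress:uuid:<uuid>
    NVME_HOSTID=${NVME_HOSTNQN##*:}    # assumed: keep only the UUID after the last ':'
    NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")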
00:12:26.993 11:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:26.993 11:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:12:26.993 11:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:26.993 11:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:12:26.993 11:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:26.993 11:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:26.993 11:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:26.993 11:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:26.993 11:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:26.993 11:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:26.993 11:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:12:26.993 Cannot find device "nvmf_init_br" 00:12:26.993 11:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@162 -- # true 00:12:26.993 11:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:12:26.993 Cannot find device "nvmf_init_br2" 00:12:26.993 11:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@163 -- # true 00:12:26.993 11:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:12:26.993 Cannot find device "nvmf_tgt_br" 00:12:26.993 11:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@164 -- # true 00:12:26.993 11:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:12:26.993 Cannot find device "nvmf_tgt_br2" 00:12:26.993 11:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@165 -- # true 00:12:26.993 11:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:12:26.993 Cannot find device "nvmf_init_br" 00:12:26.993 11:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@166 -- # true 00:12:26.993 11:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:12:26.993 Cannot find device "nvmf_init_br2" 00:12:26.993 11:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@167 -- # true 00:12:26.994 11:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:12:26.994 Cannot find device "nvmf_tgt_br" 00:12:26.994 11:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@168 -- # true 00:12:26.994 11:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:12:26.994 Cannot find device "nvmf_tgt_br2" 00:12:26.994 11:25:12 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@169 -- # true 00:12:26.994 11:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:12:26.994 Cannot find device "nvmf_br" 00:12:26.994 11:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@170 -- # true 00:12:26.994 11:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:12:26.994 Cannot find device "nvmf_init_if" 00:12:26.994 11:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@171 -- # true 00:12:26.994 11:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:12:26.994 Cannot find device "nvmf_init_if2" 00:12:26.994 11:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@172 -- # true 00:12:26.994 11:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:26.994 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:26.994 11:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@173 -- # true 00:12:26.994 11:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:26.994 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:26.994 11:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@174 -- # true 00:12:26.994 11:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:12:26.994 11:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:26.994 11:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:12:26.994 11:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:26.994 11:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:26.994 11:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:27.252 11:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:27.252 11:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:27.252 11:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:12:27.252 11:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:12:27.252 11:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:12:27.252 11:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:12:27.252 11:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:12:27.252 11:25:12 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:12:27.252 11:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:12:27.252 11:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:12:27.252 11:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:12:27.252 11:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:27.252 11:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:27.252 11:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:27.252 11:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:12:27.252 11:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:12:27.252 11:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:12:27.252 11:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:12:27.252 11:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:27.252 11:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:27.252 11:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:27.252 11:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:12:27.252 11:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:12:27.252 11:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:12:27.252 11:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:27.252 11:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:12:27.252 11:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:12:27.252 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:12:27.252 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.095 ms 00:12:27.252 00:12:27.252 --- 10.0.0.3 ping statistics --- 00:12:27.252 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:27.252 rtt min/avg/max/mdev = 0.095/0.095/0.095/0.000 ms 00:12:27.252 11:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:12:27.252 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:12:27.252 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.053 ms 00:12:27.252 00:12:27.252 --- 10.0.0.4 ping statistics --- 00:12:27.252 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:27.252 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:12:27.252 11:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:27.253 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:27.253 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:12:27.253 00:12:27.253 --- 10.0.0.1 ping statistics --- 00:12:27.253 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:27.253 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:12:27.253 11:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:12:27.253 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:27.253 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.103 ms 00:12:27.253 00:12:27.253 --- 10.0.0.2 ping statistics --- 00:12:27.253 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:27.253 rtt min/avg/max/mdev = 0.103/0.103/0.103/0.000 ms 00:12:27.253 11:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:27.253 11:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@461 -- # return 0 00:12:27.253 11:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:27.253 11:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:27.253 11:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:27.253 11:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:27.253 11:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:27.253 11:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:27.253 11:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:27.253 11:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:12:27.253 11:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:27.253 11:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:27.253 11:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:27.253 11:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@509 -- # nvmfpid=72179 00:12:27.253 11:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 
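The nvmf_veth_init trace above builds the test network: two veth pairs for the initiator side (nvmf_init_if/nvmf_init_if2 at 10.0.0.1 and 10.0.0.2) and two for the target side (nvmf_tgt_if/nvmf_tgt_if2 at 10.0.0.3 and 10.0.0.4, moved into the nvmf_tgt_ns_spdk namespace), with all four bridge-side peers enslaved to nvmf_br and iptables ACCEPT rules opening TCP port 4420; the four pings confirm reachability in both directions. A minimal standalone sketch of the same idea, simplified here to a single veth pair with no bridge (interface names, addresses, and the iptables rule are taken from the trace):

    # One veth pair: initiator end stays in the root namespace,
    # target end moves into a namespace, mirroring nvmf_veth_init above.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_tgt_if
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    # Admit NVMe/TCP traffic on the port the target will listen on.
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    # Reachability check, as the test does with ping -c 1.
    ping -c 1 10.0.0.3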
00:12:27.253 11:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@510 -- # waitforlisten 72179 00:12:27.253 11:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # '[' -z 72179 ']' 00:12:27.253 11:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:27.253 11:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:27.253 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:27.253 11:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:27.253 11:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:27.253 11:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:27.512 [2024-11-20 11:25:12.878781] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 00:12:27.512 [2024-11-20 11:25:12.878888] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:27.512 [2024-11-20 11:25:13.039720] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:27.512 [2024-11-20 11:25:13.082455] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:27.512 [2024-11-20 11:25:13.082542] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:27.512 [2024-11-20 11:25:13.082566] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:27.512 [2024-11-20 11:25:13.082583] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:27.512 [2024-11-20 11:25:13.082599] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
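nvmf_tgt is launched inside the namespace with -m 0xF (four cores, matching the four reactors reported below), -e 0xFFFF (the tracepoint mask echoed in the notices above) and -i 0 (shared-memory id); waitforlisten then blocks until the process is alive and its JSON-RPC socket /var/tmp/spdk.sock answers, so the rpc_cmd calls that follow cannot race the target. A rough standalone equivalent of that wait (the polling loop and timeout are illustrative assumptions, not the waitforlisten source; rpc.py and spdk_get_version are stock SPDK tooling):

    # Assumed: $nvmfpid holds the pid of the nvmf_tgt started above.
    for _ in $(seq 1 100); do
        kill -0 "$nvmfpid" 2>/dev/null || { echo 'target exited early' >&2; exit 1; }
        if /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version >/dev/null 2>&1; then
            break   # RPC socket /var/tmp/spdk.sock is serving requests
        fi
        sleep 0.1
    done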
00:12:27.512 [2024-11-20 11:25:13.083545] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:27.512 [2024-11-20 11:25:13.083703] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:27.512 [2024-11-20 11:25:13.083763] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:27.512 [2024-11-20 11:25:13.083768] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:27.771 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:27.771 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@868 -- # return 0 00:12:27.771 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:27.771 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:27.771 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:27.771 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:27.771 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:27.771 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.771 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:27.771 [2024-11-20 11:25:13.226150] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:27.771 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.771 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:12:27.771 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:27.771 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:12:27.771 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.771 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:27.771 Null1 00:12:27.771 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.771 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:27.771 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.771 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:27.771 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.771 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:12:27.771 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.771 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:27.771 11:25:13 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.771 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:12:27.771 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.771 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:27.771 [2024-11-20 11:25:13.282481] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:12:27.771 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.771 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:27.771 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:12:27.771 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.771 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:27.771 Null2 00:12:27.771 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.771 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:12:27.771 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.771 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:27.771 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.771 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:12:27.771 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.771 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:27.771 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.771 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:12:27.771 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.771 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:27.771 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.771 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:27.771 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:12:27.771 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.771 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- 
# set +x 00:12:27.771 Null3 00:12:27.771 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.771 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:12:27.771 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.771 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:27.771 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.771 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:12:27.771 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.771 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:27.771 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.771 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.3 -s 4420 00:12:27.772 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.772 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:27.772 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.772 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:27.772 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:12:27.772 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.772 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:28.030 Null4 00:12:28.030 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.030 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:12:28.030 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.030 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:28.030 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.030 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:12:28.030 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.030 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:28.030 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.030 11:25:13 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.3 -s 4420 00:12:28.030 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.030 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:28.030 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.030 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:12:28.030 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.030 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:28.030 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.030 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.3 -s 4430 00:12:28.031 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.031 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:28.031 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.031 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a --hostid=806d442c-08ba-4719-b76d-33169283459a -t tcp -a 10.0.0.3 -s 4420 00:12:28.031 00:12:28.031 Discovery Log Number of Records 6, Generation counter 6 00:12:28.031 =====Discovery Log Entry 0====== 00:12:28.031 trtype: tcp 00:12:28.031 adrfam: ipv4 00:12:28.031 subtype: current discovery subsystem 00:12:28.031 treq: not required 00:12:28.031 portid: 0 00:12:28.031 trsvcid: 4420 00:12:28.031 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:12:28.031 traddr: 10.0.0.3 00:12:28.031 eflags: explicit discovery connections, duplicate discovery information 00:12:28.031 sectype: none 00:12:28.031 =====Discovery Log Entry 1====== 00:12:28.031 trtype: tcp 00:12:28.031 adrfam: ipv4 00:12:28.031 subtype: nvme subsystem 00:12:28.031 treq: not required 00:12:28.031 portid: 0 00:12:28.031 trsvcid: 4420 00:12:28.031 subnqn: nqn.2016-06.io.spdk:cnode1 00:12:28.031 traddr: 10.0.0.3 00:12:28.031 eflags: none 00:12:28.031 sectype: none 00:12:28.031 =====Discovery Log Entry 2====== 00:12:28.031 trtype: tcp 00:12:28.031 adrfam: ipv4 00:12:28.031 subtype: nvme subsystem 00:12:28.031 treq: not required 00:12:28.031 portid: 0 00:12:28.031 trsvcid: 4420 00:12:28.031 subnqn: nqn.2016-06.io.spdk:cnode2 00:12:28.031 traddr: 10.0.0.3 00:12:28.031 eflags: none 00:12:28.031 sectype: none 00:12:28.031 =====Discovery Log Entry 3====== 00:12:28.031 trtype: tcp 00:12:28.031 adrfam: ipv4 00:12:28.031 subtype: nvme subsystem 00:12:28.031 treq: not required 00:12:28.031 portid: 0 00:12:28.031 trsvcid: 4420 00:12:28.031 subnqn: nqn.2016-06.io.spdk:cnode3 00:12:28.031 traddr: 10.0.0.3 00:12:28.031 eflags: none 00:12:28.031 sectype: none 00:12:28.031 =====Discovery Log Entry 4====== 00:12:28.031 trtype: tcp 00:12:28.031 adrfam: ipv4 00:12:28.031 subtype: nvme subsystem 
00:12:28.031 treq: not required 00:12:28.031 portid: 0 00:12:28.031 trsvcid: 4420 00:12:28.031 subnqn: nqn.2016-06.io.spdk:cnode4 00:12:28.031 traddr: 10.0.0.3 00:12:28.031 eflags: none 00:12:28.031 sectype: none 00:12:28.031 =====Discovery Log Entry 5====== 00:12:28.031 trtype: tcp 00:12:28.031 adrfam: ipv4 00:12:28.031 subtype: discovery subsystem referral 00:12:28.031 treq: not required 00:12:28.031 portid: 0 00:12:28.031 trsvcid: 4430 00:12:28.031 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:12:28.031 traddr: 10.0.0.3 00:12:28.031 eflags: none 00:12:28.031 sectype: none 00:12:28.031 Perform nvmf subsystem discovery via RPC 00:12:28.031 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:12:28.031 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:12:28.031 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.031 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:28.031 [ 00:12:28.031 { 00:12:28.031 "allow_any_host": true, 00:12:28.031 "hosts": [], 00:12:28.031 "listen_addresses": [ 00:12:28.031 { 00:12:28.031 "adrfam": "IPv4", 00:12:28.031 "traddr": "10.0.0.3", 00:12:28.031 "trsvcid": "4420", 00:12:28.031 "trtype": "TCP" 00:12:28.031 } 00:12:28.031 ], 00:12:28.031 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:12:28.031 "subtype": "Discovery" 00:12:28.031 }, 00:12:28.031 { 00:12:28.031 "allow_any_host": true, 00:12:28.031 "hosts": [], 00:12:28.031 "listen_addresses": [ 00:12:28.031 { 00:12:28.031 "adrfam": "IPv4", 00:12:28.031 "traddr": "10.0.0.3", 00:12:28.031 "trsvcid": "4420", 00:12:28.031 "trtype": "TCP" 00:12:28.031 } 00:12:28.031 ], 00:12:28.031 "max_cntlid": 65519, 00:12:28.031 "max_namespaces": 32, 00:12:28.031 "min_cntlid": 1, 00:12:28.031 "model_number": "SPDK bdev Controller", 00:12:28.031 "namespaces": [ 00:12:28.031 { 00:12:28.031 "bdev_name": "Null1", 00:12:28.031 "name": "Null1", 00:12:28.031 "nguid": "079FAF83C0674FB3A87A976BA084306F", 00:12:28.031 "nsid": 1, 00:12:28.031 "uuid": "079faf83-c067-4fb3-a87a-976ba084306f" 00:12:28.031 } 00:12:28.031 ], 00:12:28.031 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:28.031 "serial_number": "SPDK00000000000001", 00:12:28.031 "subtype": "NVMe" 00:12:28.031 }, 00:12:28.031 { 00:12:28.031 "allow_any_host": true, 00:12:28.031 "hosts": [], 00:12:28.031 "listen_addresses": [ 00:12:28.031 { 00:12:28.031 "adrfam": "IPv4", 00:12:28.031 "traddr": "10.0.0.3", 00:12:28.031 "trsvcid": "4420", 00:12:28.031 "trtype": "TCP" 00:12:28.031 } 00:12:28.031 ], 00:12:28.031 "max_cntlid": 65519, 00:12:28.031 "max_namespaces": 32, 00:12:28.031 "min_cntlid": 1, 00:12:28.031 "model_number": "SPDK bdev Controller", 00:12:28.031 "namespaces": [ 00:12:28.031 { 00:12:28.031 "bdev_name": "Null2", 00:12:28.031 "name": "Null2", 00:12:28.031 "nguid": "09AC19CC952143EABADF3A92B844D5C4", 00:12:28.031 "nsid": 1, 00:12:28.031 "uuid": "09ac19cc-9521-43ea-badf-3a92b844d5c4" 00:12:28.031 } 00:12:28.031 ], 00:12:28.031 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:12:28.031 "serial_number": "SPDK00000000000002", 00:12:28.031 "subtype": "NVMe" 00:12:28.031 }, 00:12:28.031 { 00:12:28.031 "allow_any_host": true, 00:12:28.031 "hosts": [], 00:12:28.031 "listen_addresses": [ 00:12:28.031 { 00:12:28.031 "adrfam": "IPv4", 00:12:28.031 "traddr": "10.0.0.3", 00:12:28.031 "trsvcid": "4420", 00:12:28.031 
"trtype": "TCP" 00:12:28.031 } 00:12:28.031 ], 00:12:28.031 "max_cntlid": 65519, 00:12:28.031 "max_namespaces": 32, 00:12:28.031 "min_cntlid": 1, 00:12:28.031 "model_number": "SPDK bdev Controller", 00:12:28.031 "namespaces": [ 00:12:28.031 { 00:12:28.031 "bdev_name": "Null3", 00:12:28.031 "name": "Null3", 00:12:28.031 "nguid": "B5C50A4AEECD4970AA57BF4BFB0A2846", 00:12:28.031 "nsid": 1, 00:12:28.031 "uuid": "b5c50a4a-eecd-4970-aa57-bf4bfb0a2846" 00:12:28.031 } 00:12:28.031 ], 00:12:28.031 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:12:28.031 "serial_number": "SPDK00000000000003", 00:12:28.031 "subtype": "NVMe" 00:12:28.031 }, 00:12:28.031 { 00:12:28.031 "allow_any_host": true, 00:12:28.031 "hosts": [], 00:12:28.031 "listen_addresses": [ 00:12:28.031 { 00:12:28.031 "adrfam": "IPv4", 00:12:28.031 "traddr": "10.0.0.3", 00:12:28.031 "trsvcid": "4420", 00:12:28.031 "trtype": "TCP" 00:12:28.031 } 00:12:28.031 ], 00:12:28.031 "max_cntlid": 65519, 00:12:28.031 "max_namespaces": 32, 00:12:28.031 "min_cntlid": 1, 00:12:28.031 "model_number": "SPDK bdev Controller", 00:12:28.031 "namespaces": [ 00:12:28.031 { 00:12:28.031 "bdev_name": "Null4", 00:12:28.031 "name": "Null4", 00:12:28.031 "nguid": "F4451D20C8D94059BD216776DDEEAEAF", 00:12:28.031 "nsid": 1, 00:12:28.031 "uuid": "f4451d20-c8d9-4059-bd21-6776ddeeaeaf" 00:12:28.031 } 00:12:28.031 ], 00:12:28.031 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:12:28.031 "serial_number": "SPDK00000000000004", 00:12:28.031 "subtype": "NVMe" 00:12:28.031 } 00:12:28.031 ] 00:12:28.031 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.031 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:12:28.031 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:28.031 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:28.031 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.031 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:28.031 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.031 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:12:28.031 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.031 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:28.031 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.031 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:28.031 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:12:28.031 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.031 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:28.031 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.031 11:25:13 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:12:28.031 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.031 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:28.031 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.032 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:28.032 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:12:28.032 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.032 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:28.032 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.032 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:12:28.032 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.032 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:28.032 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.032 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:28.032 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:12:28.032 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.032 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:28.290 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.290 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:12:28.290 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.290 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:28.290 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.290 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.3 -s 4430 00:12:28.290 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.290 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:28.290 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.290 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:12:28.290 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:12:28.290 11:25:13 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.290 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:28.290 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.290 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:12:28.290 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:12:28.290 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:12:28.290 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:12:28.290 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:28.290 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # sync 00:12:28.290 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:28.290 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set +e 00:12:28.290 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:28.290 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:28.290 rmmod nvme_tcp 00:12:28.290 rmmod nvme_fabrics 00:12:28.290 rmmod nvme_keyring 00:12:28.290 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:28.290 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@128 -- # set -e 00:12:28.290 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@129 -- # return 0 00:12:28.290 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@517 -- # '[' -n 72179 ']' 00:12:28.290 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@518 -- # killprocess 72179 00:12:28.290 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # '[' -z 72179 ']' 00:12:28.290 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # kill -0 72179 00:12:28.290 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # uname 00:12:28.290 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:28.290 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72179 00:12:28.290 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:28.290 killing process with pid 72179 00:12:28.290 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:28.290 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72179' 00:12:28.290 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@973 -- # kill 72179 00:12:28.290 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@978 -- # wait 72179 00:12:28.549 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:28.549 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:28.549 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:28.549 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # iptr 00:12:28.549 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-save 00:12:28.549 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:12:28.549 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:28.549 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:28.549 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:12:28.549 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:12:28.549 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:12:28.549 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:12:28.549 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:12:28.549 11:25:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:12:28.549 11:25:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:12:28.549 11:25:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:12:28.549 11:25:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:12:28.549 11:25:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:12:28.549 11:25:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:12:28.549 11:25:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:12:28.549 11:25:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:28.808 11:25:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:28.808 11:25:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@246 -- # remove_spdk_ns 00:12:28.808 11:25:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:28.808 11:25:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:28.808 11:25:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:28.808 11:25:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@300 -- # return 0 00:12:28.808 00:12:28.808 real 0m1.984s 00:12:28.808 user 0m3.666s 00:12:28.808 sys 0m0.676s 00:12:28.808 11:25:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1130 -- 
# xtrace_disable 00:12:28.808 11:25:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:28.808 ************************************ 00:12:28.808 END TEST nvmf_target_discovery 00:12:28.808 ************************************ 00:12:28.808 11:25:14 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /home/vagrant/spdk_repo/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:12:28.808 11:25:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:28.808 11:25:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:28.808 11:25:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:28.808 ************************************ 00:12:28.808 START TEST nvmf_referrals 00:12:28.808 ************************************ 00:12:28.808 11:25:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:12:28.808 * Looking for test storage... 00:12:28.808 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:28.808 11:25:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:28.808 11:25:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # lcov --version 00:12:28.808 11:25:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:29.067 11:25:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:29.068 11:25:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:29.068 11:25:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:29.068 11:25:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:29.068 11:25:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-: 00:12:29.068 11:25:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1 00:12:29.068 11:25:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-: 00:12:29.068 11:25:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2 00:12:29.068 11:25:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<' 00:12:29.068 11:25:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2 00:12:29.068 11:25:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1 00:12:29.068 11:25:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:29.068 11:25:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@344 -- # case "$op" in 00:12:29.068 11:25:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1 00:12:29.068 11:25:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:29.068 11:25:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:29.068 11:25:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1 00:12:29.068 11:25:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1 00:12:29.068 11:25:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:29.068 11:25:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1 00:12:29.068 11:25:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1 00:12:29.068 11:25:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2 00:12:29.068 11:25:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2 00:12:29.068 11:25:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:29.068 11:25:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2 00:12:29.068 11:25:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2 00:12:29.068 11:25:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:29.068 11:25:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:29.068 11:25:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0 00:12:29.068 11:25:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:29.068 11:25:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:29.068 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:29.068 --rc genhtml_branch_coverage=1 00:12:29.068 --rc genhtml_function_coverage=1 00:12:29.068 --rc genhtml_legend=1 00:12:29.068 --rc geninfo_all_blocks=1 00:12:29.068 --rc geninfo_unexecuted_blocks=1 00:12:29.068 00:12:29.068 ' 00:12:29.068 11:25:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:29.068 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:29.068 --rc genhtml_branch_coverage=1 00:12:29.068 --rc genhtml_function_coverage=1 00:12:29.068 --rc genhtml_legend=1 00:12:29.068 --rc geninfo_all_blocks=1 00:12:29.068 --rc geninfo_unexecuted_blocks=1 00:12:29.068 00:12:29.068 ' 00:12:29.068 11:25:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:29.068 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:29.068 --rc genhtml_branch_coverage=1 00:12:29.068 --rc genhtml_function_coverage=1 00:12:29.068 --rc genhtml_legend=1 00:12:29.068 --rc geninfo_all_blocks=1 00:12:29.068 --rc geninfo_unexecuted_blocks=1 00:12:29.068 00:12:29.068 ' 00:12:29.068 11:25:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:29.068 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:29.068 --rc genhtml_branch_coverage=1 00:12:29.068 --rc genhtml_function_coverage=1 00:12:29.068 --rc genhtml_legend=1 00:12:29.068 --rc geninfo_all_blocks=1 00:12:29.068 --rc geninfo_unexecuted_blocks=1 00:12:29.068 00:12:29.068 ' 00:12:29.068 11:25:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:29.068 11:25:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 
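The lcov gate traced above (scripts/common.sh: lt 1.15 2 via cmp_versions, with decimal normalizing each field) is a plain field-by-field version compare. A condensed sketch of that logic, not the verbatim helper, assuming purely numeric version fields:

# Sketch of the field-by-field version compare seen in the scripts/common.sh trace above.
# Helper names follow the trace; the real script also runs each field through decimal().
cmp_versions() {
    local op=$2
    local IFS=.-:                       # split versions on '.', '-' and ':'
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$3"
    local v a b
    for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
        a=${ver1[v]:-0} b=${ver2[v]:-0}  # missing fields compare as 0
        (( a > b )) && { [[ $op == '>' ]]; return; }
        (( a < b )) && { [[ $op == '<' ]]; return; }
    done
    [[ $op == '<=' || $op == '>=' || $op == '==' ]]
}
lt() { cmp_versions "$1" '<' "$2"; }     # lt 1.15 2 succeeds, as in the trace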
00:12:29.068 11:25:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:29.068 11:25:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:29.068 11:25:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:29.068 11:25:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:29.068 11:25:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:29.068 11:25:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:29.068 11:25:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:29.068 11:25:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:29.068 11:25:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:29.068 11:25:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:29.068 11:25:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a 00:12:29.068 11:25:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=806d442c-08ba-4719-b76d-33169283459a 00:12:29.068 11:25:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:29.068 11:25:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:29.068 11:25:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:29.068 11:25:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:29.068 11:25:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:29.068 11:25:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s extglob 00:12:29.068 11:25:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:29.068 11:25:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:29.068 11:25:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:29.068 11:25:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:29.068 11:25:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:29.068 11:25:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:29.068 11:25:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:12:29.068 11:25:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:29.068 11:25:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # : 0 00:12:29.068 11:25:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:29.068 11:25:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:29.068 11:25:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:29.068 11:25:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:29.068 11:25:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:29.068 11:25:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:29.068 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:29.068 11:25:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:29.068 11:25:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:29.068 11:25:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:29.068 11:25:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:12:29.068 11:25:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:12:29.068 11:25:14 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:12:29.068 11:25:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:12:29.069 11:25:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:12:29.069 11:25:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:12:29.069 11:25:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:12:29.069 11:25:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:29.069 11:25:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:29.069 11:25:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:29.069 11:25:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:29.069 11:25:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:29.069 11:25:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:29.069 11:25:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:29.069 11:25:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:29.069 11:25:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:12:29.069 11:25:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:12:29.069 11:25:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:12:29.069 11:25:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:12:29.069 11:25:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:12:29.069 11:25:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@460 -- # nvmf_veth_init 00:12:29.069 11:25:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:29.069 11:25:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:12:29.069 11:25:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:12:29.069 11:25:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:12:29.069 11:25:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:29.069 11:25:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:12:29.069 11:25:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:29.069 11:25:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:12:29.069 11:25:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:29.069 11:25:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:12:29.069 11:25:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@155 -- # 
NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:29.069 11:25:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:29.069 11:25:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:29.069 11:25:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:29.069 11:25:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:29.069 11:25:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:29.069 11:25:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:12:29.069 Cannot find device "nvmf_init_br" 00:12:29.069 11:25:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@162 -- # true 00:12:29.069 11:25:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:12:29.069 Cannot find device "nvmf_init_br2" 00:12:29.069 11:25:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@163 -- # true 00:12:29.069 11:25:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:12:29.069 Cannot find device "nvmf_tgt_br" 00:12:29.069 11:25:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@164 -- # true 00:12:29.069 11:25:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:12:29.069 Cannot find device "nvmf_tgt_br2" 00:12:29.069 11:25:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@165 -- # true 00:12:29.069 11:25:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:12:29.069 Cannot find device "nvmf_init_br" 00:12:29.069 11:25:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@166 -- # true 00:12:29.069 11:25:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:12:29.069 Cannot find device "nvmf_init_br2" 00:12:29.069 11:25:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@167 -- # true 00:12:29.069 11:25:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:12:29.069 Cannot find device "nvmf_tgt_br" 00:12:29.069 11:25:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@168 -- # true 00:12:29.069 11:25:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:12:29.069 Cannot find device "nvmf_tgt_br2" 00:12:29.069 11:25:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@169 -- # true 00:12:29.069 11:25:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:12:29.069 Cannot find device "nvmf_br" 00:12:29.069 11:25:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@170 -- # true 00:12:29.069 11:25:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:12:29.069 Cannot find device "nvmf_init_if" 00:12:29.069 11:25:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@171 -- # true 00:12:29.069 11:25:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:12:29.069 Cannot find device "nvmf_init_if2" 00:12:29.069 11:25:14 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@172 -- # true 00:12:29.069 11:25:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:29.069 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:29.069 11:25:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@173 -- # true 00:12:29.069 11:25:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:29.069 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:29.069 11:25:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@174 -- # true 00:12:29.069 11:25:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:12:29.069 11:25:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:29.069 11:25:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:12:29.069 11:25:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:29.069 11:25:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:29.069 11:25:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:29.327 11:25:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:29.327 11:25:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:29.327 11:25:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:12:29.327 11:25:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:12:29.327 11:25:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:12:29.327 11:25:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:12:29.327 11:25:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:12:29.327 11:25:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:12:29.327 11:25:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:12:29.327 11:25:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:12:29.327 11:25:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:12:29.327 11:25:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:29.327 11:25:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:29.327 11:25:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:29.327 11:25:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:12:29.327 11:25:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:12:29.327 11:25:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:12:29.327 11:25:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:12:29.327 11:25:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:29.327 11:25:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:29.327 11:25:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:29.327 11:25:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:12:29.327 11:25:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:12:29.327 11:25:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:12:29.327 11:25:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:29.327 11:25:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:12:29.327 11:25:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:12:29.327 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:29.327 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.091 ms 00:12:29.327 00:12:29.327 --- 10.0.0.3 ping statistics --- 00:12:29.327 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:29.327 rtt min/avg/max/mdev = 0.091/0.091/0.091/0.000 ms 00:12:29.327 11:25:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:12:29.327 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:12:29.327 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.047 ms 00:12:29.327 00:12:29.327 --- 10.0.0.4 ping statistics --- 00:12:29.327 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:29.327 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:12:29.327 11:25:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:29.327 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:29.327 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.041 ms 00:12:29.327 00:12:29.327 --- 10.0.0.1 ping statistics --- 00:12:29.327 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:29.327 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:12:29.327 11:25:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:12:29.327 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:29.327 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.059 ms 00:12:29.327 00:12:29.327 --- 10.0.0.2 ping statistics --- 00:12:29.327 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:29.328 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:12:29.328 11:25:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:29.328 11:25:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@461 -- # return 0 00:12:29.328 11:25:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:29.328 11:25:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:29.328 11:25:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:29.328 11:25:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:29.328 11:25:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:29.328 11:25:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:29.328 11:25:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:29.328 11:25:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:12:29.328 11:25:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:29.328 11:25:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:29.328 11:25:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:29.328 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:29.328 11:25:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@509 -- # nvmfpid=72443 00:12:29.328 11:25:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:29.328 11:25:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@510 -- # waitforlisten 72443 00:12:29.328 11:25:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # '[' -z 72443 ']' 00:12:29.328 11:25:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:29.328 11:25:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:29.328 11:25:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:29.328 11:25:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:29.328 11:25:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:29.586 [2024-11-20 11:25:14.937327] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 
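The nvmf_veth_init sequence traced above builds the test's veth-plus-bridge topology before nvmf_tgt is started inside the namespace. Condensed to its essential commands, a sketch with interface names and addresses exactly as they appear in the trace (the nvmf_init_if2/nvmf_tgt_if2 pair is set up the same way and is elided here, as are the iptables comment markers):

# Condensed from the nvmf_veth_init trace above.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator-side pair
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target-side pair
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk              # target end lives in the namespace
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link set nvmf_init_if up; ip link set nvmf_init_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip link set nvmf_tgt_br up
ip link add nvmf_br type bridge && ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br                     # both peer ends join the bridge
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.3                                          # initiator -> target sanity check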
00:12:29.586 [2024-11-20 11:25:14.937429] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:29.586 [2024-11-20 11:25:15.085406] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:29.586 [2024-11-20 11:25:15.126703] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:29.586 [2024-11-20 11:25:15.126967] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:29.586 [2024-11-20 11:25:15.127150] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:29.586 [2024-11-20 11:25:15.127358] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:29.586 [2024-11-20 11:25:15.127501] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:29.586 [2024-11-20 11:25:15.128508] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:29.586 [2024-11-20 11:25:15.128667] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:29.586 [2024-11-20 11:25:15.128669] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:29.586 [2024-11-20 11:25:15.128576] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:29.845 11:25:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:29.845 11:25:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@868 -- # return 0 00:12:29.845 11:25:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:29.845 11:25:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:29.845 11:25:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:29.845 11:25:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:29.845 11:25:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:29.845 11:25:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.845 11:25:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:29.845 [2024-11-20 11:25:15.300152] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:29.845 11:25:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.845 11:25:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.3 -s 8009 discovery 00:12:29.845 11:25:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.845 11:25:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:29.845 [2024-11-20 11:25:15.316856] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 00:12:29.845 11:25:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.845 11:25:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:12:29.845 11:25:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.845 11:25:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:29.845 11:25:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.845 11:25:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:12:29.845 11:25:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.845 11:25:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:29.845 11:25:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.845 11:25:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:12:29.845 11:25:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.845 11:25:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:29.845 11:25:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.845 11:25:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:12:29.845 11:25:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:29.845 11:25:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.845 11:25:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:29.845 11:25:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.845 11:25:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:12:29.845 11:25:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:12:29.845 11:25:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:29.845 11:25:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:29.845 11:25:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.845 11:25:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:29.845 11:25:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:29.846 11:25:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:29.846 11:25:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.104 11:25:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:12:30.104 11:25:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:12:30.104 11:25:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:12:30.104 11:25:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:30.104 11:25:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:30.104 11:25:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a --hostid=806d442c-08ba-4719-b76d-33169283459a -t tcp -a 10.0.0.3 -s 8009 -o json 00:12:30.104 11:25:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:30.104 11:25:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:30.104 11:25:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:12:30.104 11:25:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:12:30.104 11:25:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:12:30.104 11:25:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.104 11:25:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:30.105 11:25:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.105 11:25:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:12:30.105 11:25:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.105 11:25:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:30.105 11:25:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.105 11:25:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:12:30.105 11:25:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.105 11:25:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:30.105 11:25:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.105 11:25:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:30.105 11:25:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.105 11:25:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:30.105 11:25:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:12:30.105 11:25:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.105 11:25:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:12:30.105 11:25:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:12:30.105 11:25:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:30.105 11:25:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ 
nvme == \n\v\m\e ]] 00:12:30.105 11:25:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a --hostid=806d442c-08ba-4719-b76d-33169283459a -t tcp -a 10.0.0.3 -s 8009 -o json 00:12:30.105 11:25:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:30.105 11:25:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:30.363 11:25:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:12:30.363 11:25:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:12:30.363 11:25:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:12:30.363 11:25:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.363 11:25:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:30.363 11:25:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.363 11:25:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:12:30.363 11:25:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.363 11:25:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:30.363 11:25:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.363 11:25:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:12:30.363 11:25:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:30.363 11:25:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:30.363 11:25:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.363 11:25:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:30.363 11:25:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:30.363 11:25:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:30.363 11:25:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.363 11:25:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:12:30.363 11:25:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:12:30.363 11:25:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:12:30.363 11:25:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:30.363 11:25:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:30.363 11:25:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a --hostid=806d442c-08ba-4719-b76d-33169283459a -t tcp -a 10.0.0.3 -s 8009 -o json 00:12:30.363 11:25:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:30.363 11:25:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:30.622 11:25:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:12:30.622 11:25:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:12:30.622 11:25:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:12:30.622 11:25:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:12:30.622 11:25:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:12:30.622 11:25:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:12:30.622 11:25:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a --hostid=806d442c-08ba-4719-b76d-33169283459a -t tcp -a 10.0.0.3 -s 8009 -o json 00:12:30.622 11:25:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:12:30.622 11:25:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:12:30.622 11:25:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:12:30.622 11:25:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:12:30.622 11:25:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a --hostid=806d442c-08ba-4719-b76d-33169283459a -t tcp -a 10.0.0.3 -s 8009 -o json 00:12:30.622 11:25:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:12:30.880 11:25:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:12:30.880 11:25:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:12:30.880 11:25:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.880 11:25:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:30.880 11:25:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.880 11:25:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:12:30.880 11:25:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:30.880 11:25:16 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:30.880 11:25:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:30.880 11:25:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.880 11:25:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:30.880 11:25:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:30.880 11:25:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.880 11:25:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:12:30.880 11:25:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:12:30.880 11:25:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:12:30.880 11:25:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:30.880 11:25:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:30.880 11:25:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a --hostid=806d442c-08ba-4719-b76d-33169283459a -t tcp -a 10.0.0.3 -s 8009 -o json 00:12:30.880 11:25:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:30.880 11:25:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:30.880 11:25:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:12:30.880 11:25:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:12:30.880 11:25:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:12:30.880 11:25:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:12:30.880 11:25:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:12:30.880 11:25:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a --hostid=806d442c-08ba-4719-b76d-33169283459a -t tcp -a 10.0.0.3 -s 8009 -o json 00:12:30.880 11:25:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:12:31.139 11:25:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:12:31.139 11:25:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:12:31.139 11:25:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:12:31.139 11:25:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:12:31.139 11:25:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a 
--hostid=806d442c-08ba-4719-b76d-33169283459a -t tcp -a 10.0.0.3 -s 8009 -o json 00:12:31.139 11:25:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:12:31.139 11:25:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:12:31.139 11:25:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:12:31.139 11:25:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.139 11:25:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:31.139 11:25:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.139 11:25:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:31.139 11:25:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.139 11:25:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:12:31.139 11:25:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:31.139 11:25:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.398 11:25:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:12:31.398 11:25:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:12:31.398 11:25:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:31.398 11:25:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:31.398 11:25:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a --hostid=806d442c-08ba-4719-b76d-33169283459a -t tcp -a 10.0.0.3 -s 8009 -o json 00:12:31.398 11:25:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:31.398 11:25:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:31.398 11:25:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:12:31.398 11:25:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:12:31.398 11:25:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:12:31.398 11:25:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:12:31.398 11:25:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:31.398 11:25:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # sync 00:12:31.398 11:25:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:31.398 11:25:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # set +e 00:12:31.398 11:25:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:31.398 
11:25:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:31.398 rmmod nvme_tcp 00:12:31.398 rmmod nvme_fabrics 00:12:31.398 rmmod nvme_keyring 00:12:31.398 11:25:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:31.398 11:25:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@128 -- # set -e 00:12:31.398 11:25:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@129 -- # return 0 00:12:31.398 11:25:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@517 -- # '[' -n 72443 ']' 00:12:31.398 11:25:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@518 -- # killprocess 72443 00:12:31.398 11:25:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # '[' -z 72443 ']' 00:12:31.398 11:25:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # kill -0 72443 00:12:31.398 11:25:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # uname 00:12:31.398 11:25:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:31.657 11:25:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72443 00:12:31.657 11:25:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:31.657 killing process with pid 72443 00:12:31.657 11:25:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:31.657 11:25:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72443' 00:12:31.657 11:25:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@973 -- # kill 72443 00:12:31.657 11:25:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@978 -- # wait 72443 00:12:31.657 11:25:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:31.657 11:25:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:31.657 11:25:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:31.657 11:25:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # iptr 00:12:31.657 11:25:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-save 00:12:31.657 11:25:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:31.657 11:25:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-restore 00:12:31.657 11:25:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:31.657 11:25:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:12:31.657 11:25:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:12:31.657 11:25:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:12:31.657 11:25:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:12:31.657 11:25:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:12:31.658 11:25:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals 
-- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:12:31.658 11:25:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:12:31.658 11:25:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:12:31.916 11:25:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:12:31.916 11:25:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:12:31.916 11:25:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:12:31.916 11:25:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:12:31.916 11:25:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:31.916 11:25:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:31.916 11:25:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@246 -- # remove_spdk_ns 00:12:31.916 11:25:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:31.916 11:25:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:31.916 11:25:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:31.916 11:25:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@300 -- # return 0 00:12:31.916 ************************************ 00:12:31.916 END TEST nvmf_referrals 00:12:31.916 ************************************ 00:12:31.916 00:12:31.916 real 0m3.174s 00:12:31.916 user 0m9.273s 00:12:31.916 sys 0m0.903s 00:12:31.916 11:25:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:31.916 11:25:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:31.916 11:25:17 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:12:31.916 11:25:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:31.916 11:25:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:31.916 11:25:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:31.916 ************************************ 00:12:31.916 START TEST nvmf_connect_disconnect 00:12:31.916 ************************************ 00:12:31.916 11:25:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:12:32.176 * Looking for test storage... 
00:12:32.176 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:32.176 11:25:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:32.176 11:25:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # lcov --version 00:12:32.176 11:25:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:32.176 11:25:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:32.176 11:25:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:32.176 11:25:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:32.176 11:25:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:32.176 11:25:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:12:32.176 11:25:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:12:32.176 11:25:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:12:32.176 11:25:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:12:32.176 11:25:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:12:32.176 11:25:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:12:32.176 11:25:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:12:32.176 11:25:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:32.176 11:25:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:12:32.176 11:25:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1 00:12:32.176 11:25:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:32.176 11:25:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:32.176 11:25:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1 00:12:32.176 11:25:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1 00:12:32.176 11:25:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:32.176 11:25:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1 00:12:32.176 11:25:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:12:32.176 11:25:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2 00:12:32.176 11:25:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2 00:12:32.176 11:25:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:32.176 11:25:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2 00:12:32.176 11:25:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:12:32.176 11:25:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:32.176 11:25:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:32.176 11:25:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0 00:12:32.176 11:25:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:32.176 11:25:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:32.176 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:32.176 --rc genhtml_branch_coverage=1 00:12:32.176 --rc genhtml_function_coverage=1 00:12:32.176 --rc genhtml_legend=1 00:12:32.176 --rc geninfo_all_blocks=1 00:12:32.176 --rc geninfo_unexecuted_blocks=1 00:12:32.176 00:12:32.176 ' 00:12:32.176 11:25:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:32.176 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:32.176 --rc genhtml_branch_coverage=1 00:12:32.176 --rc genhtml_function_coverage=1 00:12:32.176 --rc genhtml_legend=1 00:12:32.176 --rc geninfo_all_blocks=1 00:12:32.176 --rc geninfo_unexecuted_blocks=1 00:12:32.176 00:12:32.176 ' 00:12:32.176 11:25:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:32.176 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:32.176 --rc genhtml_branch_coverage=1 00:12:32.176 --rc genhtml_function_coverage=1 00:12:32.176 --rc genhtml_legend=1 00:12:32.176 --rc geninfo_all_blocks=1 00:12:32.176 --rc geninfo_unexecuted_blocks=1 00:12:32.176 00:12:32.176 ' 00:12:32.176 11:25:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:32.176 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:32.176 --rc genhtml_branch_coverage=1 00:12:32.176 --rc genhtml_function_coverage=1 00:12:32.176 --rc genhtml_legend=1 00:12:32.176 --rc geninfo_all_blocks=1 00:12:32.176 --rc geninfo_unexecuted_blocks=1 00:12:32.176 00:12:32.176 ' 00:12:32.176 11:25:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
target/connect_disconnect.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:32.176 11:25:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:12:32.176 11:25:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:32.176 11:25:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:32.176 11:25:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:32.176 11:25:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:32.176 11:25:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:32.176 11:25:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:32.176 11:25:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:32.176 11:25:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:32.176 11:25:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:32.176 11:25:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:32.176 11:25:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a 00:12:32.176 11:25:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=806d442c-08ba-4719-b76d-33169283459a 00:12:32.176 11:25:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:32.176 11:25:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:32.177 11:25:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:32.177 11:25:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:32.177 11:25:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:32.177 11:25:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:12:32.177 11:25:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:32.177 11:25:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:32.177 11:25:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:32.177 11:25:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:32.177 11:25:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:32.177 11:25:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:32.177 11:25:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:12:32.177 11:25:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:32.177 11:25:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # : 0 00:12:32.177 11:25:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:32.177 11:25:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:32.177 11:25:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:32.177 11:25:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:32.177 11:25:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:32.177 11:25:17 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:32.177 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:32.177 11:25:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:32.177 11:25:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:32.177 11:25:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:32.177 11:25:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:32.177 11:25:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:32.177 11:25:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:12:32.177 11:25:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:32.177 11:25:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:32.177 11:25:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:32.177 11:25:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:32.177 11:25:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:32.177 11:25:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:32.177 11:25:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:32.177 11:25:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:32.177 11:25:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:12:32.177 11:25:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:12:32.177 11:25:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:12:32.177 11:25:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:12:32.177 11:25:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:12:32.177 11:25:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@460 -- # nvmf_veth_init 00:12:32.177 11:25:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:32.177 11:25:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:12:32.177 11:25:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:12:32.177 11:25:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:12:32.177 11:25:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:32.177 11:25:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:12:32.177 11:25:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@151 -- # 
NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:32.177 11:25:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:12:32.177 11:25:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:32.177 11:25:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:12:32.177 11:25:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:32.177 11:25:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:32.177 11:25:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:32.177 11:25:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:32.177 11:25:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:32.177 11:25:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:32.177 11:25:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:12:32.177 Cannot find device "nvmf_init_br" 00:12:32.177 11:25:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@162 -- # true 00:12:32.177 11:25:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:12:32.177 Cannot find device "nvmf_init_br2" 00:12:32.177 11:25:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@163 -- # true 00:12:32.177 11:25:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:12:32.177 Cannot find device "nvmf_tgt_br" 00:12:32.177 11:25:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@164 -- # true 00:12:32.177 11:25:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:12:32.177 Cannot find device "nvmf_tgt_br2" 00:12:32.177 11:25:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@165 -- # true 00:12:32.177 11:25:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:12:32.177 Cannot find device "nvmf_init_br" 00:12:32.177 11:25:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@166 -- # true 00:12:32.177 11:25:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:12:32.177 Cannot find device "nvmf_init_br2" 00:12:32.177 11:25:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@167 -- # true 00:12:32.177 11:25:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:12:32.177 Cannot find device "nvmf_tgt_br" 00:12:32.177 11:25:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@168 -- # true 00:12:32.177 11:25:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:12:32.177 Cannot find device "nvmf_tgt_br2" 00:12:32.177 11:25:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@169 -- # true 
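The "Cannot find device" probes around this point are nvmf_veth_init clearing out any interfaces left over from a previous run (each probe is paired with a bare "true" so a missing device does not trip set -e); the ip commands a few lines further down then rebuild the test topology from scratch. Condensed, using the same names and addresses as the trace and omitting the link-up steps and the SPDK_NVMF iptables ACCEPT rules that follow, the layout it builds amounts to:

# Rebuild the veth topology (condensed sketch of nvmf/common.sh@177-@214 below)
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip addr add 10.0.0.2/24 dev nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
# One bridge joins the host-side peers of all four veth pairs so the
# initiator addresses (10.0.0.1/.2) can reach the target namespace (10.0.0.3/.4).
ip link add nvmf_br type bridge
for peer in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$peer" master nvmf_br
done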
00:12:32.177 11:25:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:12:32.177 Cannot find device "nvmf_br" 00:12:32.177 11:25:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@170 -- # true 00:12:32.177 11:25:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:12:32.436 Cannot find device "nvmf_init_if" 00:12:32.436 11:25:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@171 -- # true 00:12:32.436 11:25:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:12:32.436 Cannot find device "nvmf_init_if2" 00:12:32.436 11:25:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@172 -- # true 00:12:32.436 11:25:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:32.436 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:32.436 11:25:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@173 -- # true 00:12:32.436 11:25:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:32.436 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:32.436 11:25:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@174 -- # true 00:12:32.436 11:25:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:12:32.436 11:25:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:32.436 11:25:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:12:32.436 11:25:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:32.436 11:25:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:32.436 11:25:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:32.436 11:25:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:32.436 11:25:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:32.436 11:25:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:12:32.436 11:25:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:12:32.436 11:25:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:12:32.436 11:25:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:12:32.436 11:25:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:12:32.436 11:25:17 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:12:32.436 11:25:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:12:32.436 11:25:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:12:32.436 11:25:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:12:32.436 11:25:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:32.437 11:25:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:32.437 11:25:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:32.437 11:25:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:12:32.437 11:25:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:12:32.437 11:25:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:12:32.437 11:25:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:12:32.437 11:25:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:32.437 11:25:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:32.695 11:25:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:32.695 11:25:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:12:32.695 11:25:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:12:32.695 11:25:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:12:32.695 11:25:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:32.695 11:25:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:12:32.695 11:25:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:12:32.695 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:12:32.695 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.072 ms 00:12:32.695 00:12:32.695 --- 10.0.0.3 ping statistics --- 00:12:32.695 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:32.695 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:12:32.695 11:25:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:12:32.695 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:12:32.695 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.058 ms 00:12:32.695 00:12:32.695 --- 10.0.0.4 ping statistics --- 00:12:32.695 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:32.695 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:12:32.695 11:25:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:32.695 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:32.695 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:12:32.695 00:12:32.695 --- 10.0.0.1 ping statistics --- 00:12:32.695 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:32.695 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:12:32.695 11:25:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:12:32.695 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:32.695 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.079 ms 00:12:32.695 00:12:32.695 --- 10.0.0.2 ping statistics --- 00:12:32.695 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:32.695 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:12:32.695 11:25:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:32.695 11:25:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@461 -- # return 0 00:12:32.695 11:25:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:32.695 11:25:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:32.695 11:25:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:32.695 11:25:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:32.695 11:25:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:32.695 11:25:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:32.695 11:25:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:32.695 11:25:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:12:32.695 11:25:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:32.695 11:25:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:32.695 11:25:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:32.695 11:25:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@509 -- # nvmfpid=72787 00:12:32.695 11:25:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk 
/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:32.695 11:25:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@510 -- # waitforlisten 72787 00:12:32.695 11:25:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # '[' -z 72787 ']' 00:12:32.695 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:32.695 11:25:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:32.695 11:25:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:32.695 11:25:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:32.695 11:25:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:32.695 11:25:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:32.695 [2024-11-20 11:25:18.173726] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 00:12:32.695 [2024-11-20 11:25:18.173848] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:32.954 [2024-11-20 11:25:18.334148] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:32.954 [2024-11-20 11:25:18.394493] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:32.954 [2024-11-20 11:25:18.394908] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:32.954 [2024-11-20 11:25:18.395332] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:32.954 [2024-11-20 11:25:18.395693] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:32.954 [2024-11-20 11:25:18.395916] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
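At this point nvmfappstart has launched the target inside the nvmf_tgt_ns_spdk namespace and waitforlisten 72787 is blocking until the app answers on /var/tmp/spdk.sock. A minimal sketch of that start-and-wait pattern, assuming rpc.py and the default socket path (the real waitforlisten helper in autotest_common.sh does more bookkeeping than this):

# Launch the NVMe-oF target in the test namespace with the flags seen in the trace.
ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!
# Poll the RPC socket until the target answers; give up if the process has died.
while ! /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock spdk_get_version &> /dev/null; do
    kill -0 "$nvmfpid" 2> /dev/null || { echo "nvmf_tgt exited before listening"; exit 1; }
    sleep 0.5
done

Once the socket answers, the rpc_cmd calls that follow (nvmf_create_transport, bdev_malloc_create, nvmf_create_subsystem, nvmf_subsystem_add_ns, nvmf_subsystem_add_listener) all go through this same socket.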
00:12:32.954 [2024-11-20 11:25:18.397562] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:32.954 [2024-11-20 11:25:18.397666] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:32.954 [2024-11-20 11:25:18.397982] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:32.954 [2024-11-20 11:25:18.398004] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:32.954 11:25:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:32.954 11:25:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@868 -- # return 0 00:12:32.954 11:25:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:32.954 11:25:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:32.954 11:25:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:33.212 11:25:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:33.212 11:25:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:12:33.212 11:25:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.212 11:25:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:33.212 [2024-11-20 11:25:18.571496] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:33.212 11:25:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.212 11:25:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:12:33.212 11:25:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.212 11:25:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:33.212 11:25:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.212 11:25:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:12:33.212 11:25:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:33.212 11:25:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.212 11:25:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:33.212 11:25:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.212 11:25:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:33.212 11:25:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.212 11:25:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:33.212 11:25:18 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.212 11:25:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:12:33.212 11:25:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.212 11:25:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:33.212 [2024-11-20 11:25:18.643555] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:12:33.212 11:25:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.212 11:25:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:12:33.212 11:25:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:12:33.212 11:25:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:12:35.753 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:37.654 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:40.184 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:42.715 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:44.619 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:44.619 11:25:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:12:44.619 11:25:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:12:44.619 11:25:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:44.619 11:25:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # sync 00:12:44.619 11:25:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:44.619 11:25:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set +e 00:12:44.619 11:25:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:44.619 11:25:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:44.619 rmmod nvme_tcp 00:12:44.619 rmmod nvme_fabrics 00:12:44.619 rmmod nvme_keyring 00:12:44.619 11:25:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:44.619 11:25:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@128 -- # set -e 00:12:44.619 11:25:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@129 -- # return 0 00:12:44.619 11:25:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@517 -- # '[' -n 72787 ']' 00:12:44.619 11:25:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@518 -- # killprocess 72787 00:12:44.619 11:25:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # '[' -z 72787 ']' 00:12:44.619 11:25:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # kill -0 72787 00:12:44.619 11:25:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # uname 
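The five "NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)" lines above are the only visible output of the test loop, because connect_disconnect.sh switches tracing off (set +x at line 34) before iterating. Against the listener created on 10.0.0.3:4420, each of the num_iterations=5 rounds presumably boils down to a connect/disconnect pair like the following (a hypothetical reconstruction; the untraced script body may run extra checks between the two calls):

for i in $(seq 1 "$num_iterations"); do
    # Attach to the subsystem exported above...
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 \
        --hostnqn=nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a \
        --hostid=806d442c-08ba-4719-b76d-33169283459a
    # ...and detach again; nvme-cli prints the "disconnected 1 controller(s)" lines seen here.
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1
done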
00:12:44.619 11:25:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:44.619 11:25:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72787 00:12:44.905 killing process with pid 72787 00:12:44.905 11:25:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:44.905 11:25:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:44.905 11:25:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72787' 00:12:44.905 11:25:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@973 -- # kill 72787 00:12:44.905 11:25:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@978 -- # wait 72787 00:12:44.905 11:25:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:44.905 11:25:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:44.905 11:25:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:44.905 11:25:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # iptr 00:12:44.905 11:25:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:12:44.905 11:25:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:44.905 11:25:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:12:44.905 11:25:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:44.905 11:25:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:12:44.905 11:25:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:12:44.905 11:25:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:12:44.905 11:25:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:12:44.905 11:25:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:12:44.905 11:25:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:12:44.905 11:25:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:12:44.905 11:25:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:12:44.905 11:25:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:12:44.905 11:25:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:12:45.163 11:25:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:12:45.163 11:25:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:12:45.163 11:25:30 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:45.163 11:25:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:45.163 11:25:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@246 -- # remove_spdk_ns 00:12:45.163 11:25:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:45.163 11:25:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:45.163 11:25:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:45.163 11:25:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@300 -- # return 0 00:12:45.163 ************************************ 00:12:45.163 END TEST nvmf_connect_disconnect 00:12:45.163 ************************************ 00:12:45.163 00:12:45.163 real 0m13.184s 00:12:45.163 user 0m46.793s 00:12:45.163 sys 0m2.021s 00:12:45.163 11:25:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:45.163 11:25:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:45.163 11:25:30 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:45.163 11:25:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:45.163 11:25:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:45.163 11:25:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:45.163 ************************************ 00:12:45.163 START TEST nvmf_multitarget 00:12:45.163 ************************************ 00:12:45.163 11:25:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:45.163 * Looking for test storage... 
00:12:45.163 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:45.163 11:25:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:45.163 11:25:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # lcov --version 00:12:45.163 11:25:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:45.422 11:25:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:45.422 11:25:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:45.422 11:25:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:45.422 11:25:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:45.422 11:25:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-: 00:12:45.422 11:25:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # read -ra ver1 00:12:45.422 11:25:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-: 00:12:45.422 11:25:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2 00:12:45.422 11:25:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@338 -- # local 'op=<' 00:12:45.422 11:25:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2 00:12:45.422 11:25:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1 00:12:45.422 11:25:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:45.422 11:25:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in 00:12:45.422 11:25:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@345 -- # : 1 00:12:45.422 11:25:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:45.422 11:25:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:45.422 11:25:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1 00:12:45.422 11:25:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1 00:12:45.422 11:25:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:45.422 11:25:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1 00:12:45.422 11:25:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1 00:12:45.422 11:25:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2 00:12:45.422 11:25:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2 00:12:45.422 11:25:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:45.422 11:25:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2 00:12:45.422 11:25:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2 00:12:45.422 11:25:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:45.422 11:25:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:45.422 11:25:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0 00:12:45.422 11:25:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:45.422 11:25:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:45.422 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:45.422 --rc genhtml_branch_coverage=1 00:12:45.422 --rc genhtml_function_coverage=1 00:12:45.422 --rc genhtml_legend=1 00:12:45.422 --rc geninfo_all_blocks=1 00:12:45.422 --rc geninfo_unexecuted_blocks=1 00:12:45.422 00:12:45.422 ' 00:12:45.422 11:25:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:45.422 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:45.422 --rc genhtml_branch_coverage=1 00:12:45.422 --rc genhtml_function_coverage=1 00:12:45.422 --rc genhtml_legend=1 00:12:45.422 --rc geninfo_all_blocks=1 00:12:45.422 --rc geninfo_unexecuted_blocks=1 00:12:45.422 00:12:45.422 ' 00:12:45.422 11:25:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:45.422 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:45.422 --rc genhtml_branch_coverage=1 00:12:45.422 --rc genhtml_function_coverage=1 00:12:45.422 --rc genhtml_legend=1 00:12:45.422 --rc geninfo_all_blocks=1 00:12:45.422 --rc geninfo_unexecuted_blocks=1 00:12:45.422 00:12:45.422 ' 00:12:45.422 11:25:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:45.422 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:45.422 --rc genhtml_branch_coverage=1 00:12:45.422 --rc genhtml_function_coverage=1 00:12:45.422 --rc genhtml_legend=1 00:12:45.422 --rc geninfo_all_blocks=1 00:12:45.422 --rc geninfo_unexecuted_blocks=1 00:12:45.422 00:12:45.422 ' 00:12:45.422 11:25:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:45.422 11:25:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
nvmf/common.sh@7 -- # uname -s 00:12:45.422 11:25:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:45.422 11:25:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:45.422 11:25:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:45.422 11:25:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:45.422 11:25:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:45.422 11:25:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:45.422 11:25:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:45.422 11:25:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:45.422 11:25:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:45.422 11:25:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:45.422 11:25:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a 00:12:45.422 11:25:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=806d442c-08ba-4719-b76d-33169283459a 00:12:45.422 11:25:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:45.422 11:25:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:45.422 11:25:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:45.422 11:25:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:45.422 11:25:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:45.422 11:25:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob 00:12:45.422 11:25:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:45.422 11:25:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:45.422 11:25:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:45.422 11:25:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:45.422 11:25:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:45.423 11:25:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:45.423 11:25:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:12:45.423 11:25:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:45.423 11:25:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # : 0 00:12:45.423 11:25:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:45.423 11:25:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:45.423 11:25:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:45.423 11:25:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:45.423 11:25:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:45.423 11:25:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:45.423 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:45.423 11:25:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:45.423 11:25:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:45.423 11:25:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:45.423 11:25:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py 00:12:45.423 11:25:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
target/multitarget.sh@15 -- # nvmftestinit 00:12:45.423 11:25:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:45.423 11:25:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:45.423 11:25:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:45.423 11:25:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:45.423 11:25:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:45.423 11:25:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:45.423 11:25:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:45.423 11:25:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:45.423 11:25:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:12:45.423 11:25:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:12:45.423 11:25:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:12:45.423 11:25:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:12:45.423 11:25:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:12:45.423 11:25:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@460 -- # nvmf_veth_init 00:12:45.423 11:25:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:45.423 11:25:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:12:45.423 11:25:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:12:45.423 11:25:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:12:45.423 11:25:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:45.423 11:25:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:12:45.423 11:25:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:45.423 11:25:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:12:45.423 11:25:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:45.423 11:25:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:12:45.423 11:25:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:45.423 11:25:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:45.423 11:25:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:45.423 11:25:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:45.423 11:25:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:45.423 11:25:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:45.423 11:25:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:12:45.423 Cannot find device "nvmf_init_br" 00:12:45.423 11:25:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@162 -- # true 00:12:45.423 11:25:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:12:45.423 Cannot find device "nvmf_init_br2" 00:12:45.423 11:25:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@163 -- # true 00:12:45.423 11:25:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:12:45.423 Cannot find device "nvmf_tgt_br" 00:12:45.423 11:25:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@164 -- # true 00:12:45.423 11:25:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:12:45.423 Cannot find device "nvmf_tgt_br2" 00:12:45.423 11:25:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@165 -- # true 00:12:45.423 11:25:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:12:45.423 Cannot find device "nvmf_init_br" 00:12:45.423 11:25:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@166 -- # true 00:12:45.423 11:25:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:12:45.423 Cannot find device "nvmf_init_br2" 00:12:45.423 11:25:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@167 -- # true 00:12:45.423 11:25:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:12:45.423 Cannot find device "nvmf_tgt_br" 00:12:45.423 11:25:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@168 -- # true 00:12:45.423 11:25:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:12:45.423 Cannot find device "nvmf_tgt_br2" 00:12:45.423 11:25:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@169 -- # true 00:12:45.423 11:25:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:12:45.423 Cannot find device "nvmf_br" 00:12:45.423 11:25:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@170 -- # true 00:12:45.423 11:25:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:12:45.423 Cannot find device "nvmf_init_if" 00:12:45.423 11:25:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@171 -- # true 00:12:45.423 11:25:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:12:45.423 Cannot find device "nvmf_init_if2" 00:12:45.423 11:25:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@172 -- # true 00:12:45.423 11:25:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:45.423 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:45.423 11:25:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@173 -- # true 00:12:45.423 11:25:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:45.423 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:45.423 11:25:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@174 -- # true 00:12:45.423 11:25:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:12:45.423 11:25:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:45.423 11:25:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:12:45.423 11:25:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:45.682 11:25:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:45.682 11:25:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:45.682 11:25:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:45.682 11:25:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:45.682 11:25:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:12:45.682 11:25:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:12:45.682 11:25:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:12:45.682 11:25:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:12:45.682 11:25:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:12:45.682 11:25:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:12:45.682 11:25:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:12:45.682 11:25:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:12:45.682 11:25:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:12:45.682 11:25:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:45.682 11:25:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:45.682 11:25:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:45.682 11:25:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:12:45.682 11:25:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:12:45.682 11:25:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:12:45.682 11:25:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master 
nvmf_br 00:12:45.682 11:25:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:45.682 11:25:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:45.682 11:25:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:45.682 11:25:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:12:45.682 11:25:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:12:45.682 11:25:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:12:45.682 11:25:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:45.682 11:25:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:12:45.682 11:25:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:12:45.682 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:45.682 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.079 ms 00:12:45.682 00:12:45.682 --- 10.0.0.3 ping statistics --- 00:12:45.682 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:45.682 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:12:45.682 11:25:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:12:45.682 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:12:45.682 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.063 ms 00:12:45.682 00:12:45.682 --- 10.0.0.4 ping statistics --- 00:12:45.682 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:45.682 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:12:45.682 11:25:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:45.941 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:45.941 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.048 ms 00:12:45.941 00:12:45.941 --- 10.0.0.1 ping statistics --- 00:12:45.941 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:45.941 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:12:45.941 11:25:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:12:45.941 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:45.941 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.066 ms 00:12:45.941 00:12:45.941 --- 10.0.0.2 ping statistics --- 00:12:45.941 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:45.941 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:12:45.941 11:25:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:45.941 11:25:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@461 -- # return 0 00:12:45.941 11:25:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:45.941 11:25:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:45.941 11:25:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:45.941 11:25:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:45.941 11:25:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:45.941 11:25:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:45.941 11:25:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:45.941 11:25:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:12:45.941 11:25:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:45.941 11:25:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:45.941 11:25:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:45.941 11:25:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@509 -- # nvmfpid=73228 00:12:45.941 11:25:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:45.941 11:25:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@510 -- # waitforlisten 73228 00:12:45.941 11:25:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # '[' -z 73228 ']' 00:12:45.941 11:25:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:45.941 11:25:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:45.941 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:45.941 11:25:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:45.941 11:25:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:45.941 11:25:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:45.941 [2024-11-20 11:25:31.376132] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 
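For readers following the trace: the nvmf_veth_init commands above build SPDK's virtual TCP test topology — initiator veths (10.0.0.1, 10.0.0.2) stay on the host, target veths (10.0.0.3, 10.0.0.4) are moved into the nvmf_tgt_ns_spdk namespace, the peer ends are enslaved to the nvmf_br bridge, and iptables ACCEPT rules are added for port 4420 before the cross-namespace pings verify connectivity. A condensed, illustrative sketch of the same setup (one interface pair per side; not part of the captured output) would be:

# Hypothetical condensed recreation of the topology nvmf_veth_init builds above;
# the real helper creates two pairs per side and tags its iptables rules with SPDK_NVMF.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link add nvmf_br type bridge
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$dev" up; done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.3                                   # target-side address reachable from the host
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1    # and the host-side address from the namespace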
00:12:45.941 [2024-11-20 11:25:31.376261] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:46.199 [2024-11-20 11:25:31.527806] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:46.199 [2024-11-20 11:25:31.567309] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:46.199 [2024-11-20 11:25:31.567372] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:46.199 [2024-11-20 11:25:31.567387] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:46.199 [2024-11-20 11:25:31.567397] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:46.199 [2024-11-20 11:25:31.567407] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:46.199 [2024-11-20 11:25:31.568264] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:46.199 [2024-11-20 11:25:31.568394] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:46.199 [2024-11-20 11:25:31.568543] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:46.199 [2024-11-20 11:25:31.568549] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:46.199 11:25:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:46.199 11:25:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@868 -- # return 0 00:12:46.199 11:25:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:46.199 11:25:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:46.199 11:25:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:46.199 11:25:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:46.199 11:25:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:12:46.199 11:25:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:46.199 11:25:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:12:46.457 11:25:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:12:46.457 11:25:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:12:46.457 "nvmf_tgt_1" 00:12:46.457 11:25:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:12:46.715 "nvmf_tgt_2" 00:12:46.715 11:25:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:46.715 11:25:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
target/multitarget.sh@28 -- # jq length 00:12:46.715 11:25:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:12:46.715 11:25:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:12:46.973 true 00:12:46.973 11:25:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:12:47.232 true 00:12:47.232 11:25:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:12:47.232 11:25:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:47.232 11:25:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:12:47.232 11:25:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:12:47.232 11:25:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:12:47.232 11:25:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:47.232 11:25:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # sync 00:12:47.232 11:25:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:47.232 11:25:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set +e 00:12:47.232 11:25:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:47.232 11:25:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:47.232 rmmod nvme_tcp 00:12:47.232 rmmod nvme_fabrics 00:12:47.490 rmmod nvme_keyring 00:12:47.490 11:25:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:47.490 11:25:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@128 -- # set -e 00:12:47.490 11:25:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@129 -- # return 0 00:12:47.490 11:25:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@517 -- # '[' -n 73228 ']' 00:12:47.490 11:25:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@518 -- # killprocess 73228 00:12:47.490 11:25:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # '[' -z 73228 ']' 00:12:47.491 11:25:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # kill -0 73228 00:12:47.491 11:25:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # uname 00:12:47.491 11:25:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:47.491 11:25:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73228 00:12:47.491 11:25:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:47.491 11:25:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:47.491 killing process with pid 73228 00:12:47.491 11:25:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@972 -- # echo 'killing process with pid 
73228' 00:12:47.491 11:25:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@973 -- # kill 73228 00:12:47.491 11:25:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@978 -- # wait 73228 00:12:47.491 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:47.491 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:47.491 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:47.491 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # iptr 00:12:47.491 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-save 00:12:47.491 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-restore 00:12:47.491 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:47.491 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:47.491 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:12:47.491 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:12:47.491 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:12:47.749 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:12:47.749 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:12:47.749 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:12:47.749 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:12:47.749 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:12:47.749 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:12:47.749 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:12:47.749 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:12:47.749 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:12:47.749 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:47.749 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:47.749 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@246 -- # remove_spdk_ns 00:12:47.749 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:47.749 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:47.749 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:47.749 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@300 -- # return 0 00:12:47.749 00:12:47.749 
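To recap the flow multitarget_rpc.py exercised above (the default target plus two temporary ones, created and then removed), an equivalent illustrative sequence — not part of the captured output — is:

rpc=/home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py
[ "$($rpc nvmf_get_targets | jq length)" -eq 1 ]      # only the default target exists
$rpc nvmf_create_target -n nvmf_tgt_1 -s 32           # prints the new target's name
$rpc nvmf_create_target -n nvmf_tgt_2 -s 32
[ "$($rpc nvmf_get_targets | jq length)" -eq 3 ]      # default + the two just created
$rpc nvmf_delete_target -n nvmf_tgt_1                 # prints "true" on success
$rpc nvmf_delete_target -n nvmf_tgt_2
[ "$($rpc nvmf_get_targets | jq length)" -eq 1 ]      # back to just the default target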
real 0m2.615s 00:12:47.749 user 0m7.177s 00:12:47.749 sys 0m0.758s 00:12:47.749 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:47.749 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:47.749 ************************************ 00:12:47.749 END TEST nvmf_multitarget 00:12:47.749 ************************************ 00:12:47.749 11:25:33 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:47.749 11:25:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:47.749 11:25:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:47.749 11:25:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:48.008 ************************************ 00:12:48.008 START TEST nvmf_rpc 00:12:48.008 ************************************ 00:12:48.008 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:48.008 * Looking for test storage... 00:12:48.008 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:48.008 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:48.008 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:48.008 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:12:48.008 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:48.008 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:48.008 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:48.008 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:48.008 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:12:48.008 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:12:48.008 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:12:48.008 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:12:48.008 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:12:48.008 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:12:48.008 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:12:48.008 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:48.008 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in 00:12:48.008 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1 00:12:48.008 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:48.008 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:48.008 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1 00:12:48.009 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1 00:12:48.009 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:48.009 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1 00:12:48.009 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:12:48.009 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2 00:12:48.009 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2 00:12:48.009 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:48.009 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2 00:12:48.009 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:12:48.009 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:48.009 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:48.009 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0 00:12:48.009 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:48.009 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:48.009 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:48.009 --rc genhtml_branch_coverage=1 00:12:48.009 --rc genhtml_function_coverage=1 00:12:48.009 --rc genhtml_legend=1 00:12:48.009 --rc geninfo_all_blocks=1 00:12:48.009 --rc geninfo_unexecuted_blocks=1 00:12:48.009 00:12:48.009 ' 00:12:48.009 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:48.009 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:48.009 --rc genhtml_branch_coverage=1 00:12:48.009 --rc genhtml_function_coverage=1 00:12:48.009 --rc genhtml_legend=1 00:12:48.009 --rc geninfo_all_blocks=1 00:12:48.009 --rc geninfo_unexecuted_blocks=1 00:12:48.009 00:12:48.009 ' 00:12:48.009 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:48.009 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:48.009 --rc genhtml_branch_coverage=1 00:12:48.009 --rc genhtml_function_coverage=1 00:12:48.009 --rc genhtml_legend=1 00:12:48.009 --rc geninfo_all_blocks=1 00:12:48.009 --rc geninfo_unexecuted_blocks=1 00:12:48.009 00:12:48.009 ' 00:12:48.009 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:48.009 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:48.009 --rc genhtml_branch_coverage=1 00:12:48.009 --rc genhtml_function_coverage=1 00:12:48.009 --rc genhtml_legend=1 00:12:48.009 --rc geninfo_all_blocks=1 00:12:48.009 --rc geninfo_unexecuted_blocks=1 00:12:48.009 00:12:48.009 ' 00:12:48.009 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:48.009 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:12:48.009 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:48.009 11:25:33 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:48.009 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:48.009 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:48.009 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:48.009 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:48.009 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:48.009 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:48.009 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:48.009 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:48.009 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a 00:12:48.009 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=806d442c-08ba-4719-b76d-33169283459a 00:12:48.009 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:48.009 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:48.009 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:48.009 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:48.009 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:48.009 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob 00:12:48.009 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:48.009 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:48.009 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:48.009 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:48.009 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:48.009 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:48.009 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:12:48.009 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:48.009 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # : 0 00:12:48.009 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:48.009 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:48.009 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:48.009 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:48.009 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:48.009 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:48.009 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:48.009 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:48.009 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:48.009 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:48.009 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:12:48.009 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:12:48.009 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:48.009 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:48.009 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:48.009 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:48.010 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:48.010 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:48.010 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:48.010 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:48.010 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:12:48.010 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:12:48.010 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:12:48.010 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:12:48.010 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:12:48.010 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@460 -- # nvmf_veth_init 00:12:48.010 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:48.010 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:12:48.010 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:12:48.010 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:12:48.010 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:48.010 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:12:48.010 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:48.010 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:12:48.010 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:48.010 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:12:48.010 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:48.010 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:48.010 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:48.010 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:48.010 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:48.010 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:48.010 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:12:48.010 Cannot find device "nvmf_init_br" 00:12:48.010 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@162 -- # true 00:12:48.010 11:25:33 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:12:48.010 Cannot find device "nvmf_init_br2" 00:12:48.010 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@163 -- # true 00:12:48.010 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:12:48.010 Cannot find device "nvmf_tgt_br" 00:12:48.010 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@164 -- # true 00:12:48.010 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:12:48.269 Cannot find device "nvmf_tgt_br2" 00:12:48.269 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@165 -- # true 00:12:48.269 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:12:48.269 Cannot find device "nvmf_init_br" 00:12:48.269 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@166 -- # true 00:12:48.269 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:12:48.269 Cannot find device "nvmf_init_br2" 00:12:48.269 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@167 -- # true 00:12:48.269 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:12:48.269 Cannot find device "nvmf_tgt_br" 00:12:48.269 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@168 -- # true 00:12:48.269 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:12:48.269 Cannot find device "nvmf_tgt_br2" 00:12:48.269 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@169 -- # true 00:12:48.269 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:12:48.269 Cannot find device "nvmf_br" 00:12:48.269 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@170 -- # true 00:12:48.269 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:12:48.269 Cannot find device "nvmf_init_if" 00:12:48.269 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@171 -- # true 00:12:48.269 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:12:48.269 Cannot find device "nvmf_init_if2" 00:12:48.269 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@172 -- # true 00:12:48.269 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:48.269 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:48.269 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@173 -- # true 00:12:48.269 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:48.269 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:48.269 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@174 -- # true 00:12:48.269 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:12:48.269 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:48.269 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name 
nvmf_init_br2 00:12:48.269 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:48.269 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:48.269 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:48.269 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:48.269 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:48.269 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:12:48.269 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:12:48.269 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:12:48.269 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:12:48.269 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:12:48.269 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:12:48.269 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:12:48.269 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:12:48.269 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:12:48.269 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:48.269 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:48.269 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:48.269 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:12:48.270 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:12:48.270 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:12:48.528 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:12:48.528 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:48.528 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:48.528 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:48.528 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:12:48.528 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:12:48.528 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:12:48.528 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:48.528 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:12:48.528 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:12:48.528 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:48.528 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.072 ms 00:12:48.528 00:12:48.528 --- 10.0.0.3 ping statistics --- 00:12:48.528 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:48.528 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:12:48.528 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:12:48.528 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:12:48.528 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.047 ms 00:12:48.528 00:12:48.528 --- 10.0.0.4 ping statistics --- 00:12:48.528 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:48.528 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:12:48.528 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:48.528 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:48.528 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:12:48.528 00:12:48.529 --- 10.0.0.1 ping statistics --- 00:12:48.529 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:48.529 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:12:48.529 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:12:48.529 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:48.529 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.064 ms 00:12:48.529 00:12:48.529 --- 10.0.0.2 ping statistics --- 00:12:48.529 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:48.529 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:12:48.529 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:48.529 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@461 -- # return 0 00:12:48.529 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:48.529 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:48.529 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:48.529 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:48.529 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:48.529 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:48.529 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:48.529 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:12:48.529 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:48.529 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:48.529 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:48.529 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@509 -- # nvmfpid=73496 00:12:48.529 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:48.529 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@510 -- # waitforlisten 73496 00:12:48.529 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # '[' -z 73496 ']' 00:12:48.529 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:48.529 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:48.529 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:48.529 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:48.529 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:48.529 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:48.529 [2024-11-20 11:25:34.030745] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 
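Note: the nvmf/common.sh trace above finishes the test network before the target starts: two veth pairs whose target-side ends are moved into the nvmf_tgt_ns_spdk namespace, all host-side peers enslaved to the nvmf_br bridge, iptables openings for NVMe/TCP port 4420, and ping checks in both directions. A rough sketch of one of the two interface pairs (device and namespace names copied from the log; everything else is an approximation, not the nvmf/common.sh source):

# Sketch only: recreate the veth/netns/bridge layout used by the test.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br            # host-side initiator link
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br              # target-side link
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                       # move target end into the namespace
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link add nvmf_br type bridge
for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$dev" up; done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip link set nvmf_init_br master nvmf_br                              # bridge the host-side peers together
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT    # allow NVMe/TCP traffic in
ping -c 1 10.0.0.3                                                    # host -> namespace reachability check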
00:12:48.529 [2024-11-20 11:25:34.030866] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:48.788 [2024-11-20 11:25:34.174667] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:48.788 [2024-11-20 11:25:34.209431] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:48.788 [2024-11-20 11:25:34.209514] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:48.788 [2024-11-20 11:25:34.209526] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:48.788 [2024-11-20 11:25:34.209535] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:48.788 [2024-11-20 11:25:34.209542] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:48.788 [2024-11-20 11:25:34.210414] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:48.788 [2024-11-20 11:25:34.210504] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:48.788 [2024-11-20 11:25:34.210693] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:48.788 [2024-11-20 11:25:34.210694] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:48.788 11:25:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:48.788 11:25:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@868 -- # return 0 00:12:48.788 11:25:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:48.788 11:25:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:48.788 11:25:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:48.788 11:25:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:48.788 11:25:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:12:48.788 11:25:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.788 11:25:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:49.048 11:25:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.048 11:25:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:12:49.048 "poll_groups": [ 00:12:49.048 { 00:12:49.048 "admin_qpairs": 0, 00:12:49.048 "completed_nvme_io": 0, 00:12:49.048 "current_admin_qpairs": 0, 00:12:49.048 "current_io_qpairs": 0, 00:12:49.048 "io_qpairs": 0, 00:12:49.048 "name": "nvmf_tgt_poll_group_000", 00:12:49.048 "pending_bdev_io": 0, 00:12:49.048 "transports": [] 00:12:49.048 }, 00:12:49.048 { 00:12:49.048 "admin_qpairs": 0, 00:12:49.048 "completed_nvme_io": 0, 00:12:49.048 "current_admin_qpairs": 0, 00:12:49.048 "current_io_qpairs": 0, 00:12:49.048 "io_qpairs": 0, 00:12:49.048 "name": "nvmf_tgt_poll_group_001", 00:12:49.048 "pending_bdev_io": 0, 00:12:49.048 "transports": [] 00:12:49.048 }, 00:12:49.048 { 00:12:49.048 "admin_qpairs": 0, 00:12:49.048 "completed_nvme_io": 0, 00:12:49.048 "current_admin_qpairs": 0, 00:12:49.048 "current_io_qpairs": 0, 
00:12:49.048 "io_qpairs": 0, 00:12:49.048 "name": "nvmf_tgt_poll_group_002", 00:12:49.048 "pending_bdev_io": 0, 00:12:49.048 "transports": [] 00:12:49.048 }, 00:12:49.048 { 00:12:49.048 "admin_qpairs": 0, 00:12:49.048 "completed_nvme_io": 0, 00:12:49.048 "current_admin_qpairs": 0, 00:12:49.048 "current_io_qpairs": 0, 00:12:49.048 "io_qpairs": 0, 00:12:49.048 "name": "nvmf_tgt_poll_group_003", 00:12:49.048 "pending_bdev_io": 0, 00:12:49.048 "transports": [] 00:12:49.048 } 00:12:49.048 ], 00:12:49.048 "tick_rate": 2200000000 00:12:49.048 }' 00:12:49.048 11:25:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:12:49.048 11:25:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:12:49.048 11:25:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:12:49.048 11:25:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:12:49.048 11:25:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:12:49.048 11:25:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:12:49.048 11:25:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:12:49.048 11:25:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:49.048 11:25:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.048 11:25:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:49.048 [2024-11-20 11:25:34.479472] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:49.048 11:25:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.048 11:25:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:12:49.048 11:25:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.048 11:25:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:49.048 11:25:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.048 11:25:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:12:49.048 "poll_groups": [ 00:12:49.048 { 00:12:49.048 "admin_qpairs": 0, 00:12:49.048 "completed_nvme_io": 0, 00:12:49.048 "current_admin_qpairs": 0, 00:12:49.048 "current_io_qpairs": 0, 00:12:49.048 "io_qpairs": 0, 00:12:49.048 "name": "nvmf_tgt_poll_group_000", 00:12:49.048 "pending_bdev_io": 0, 00:12:49.048 "transports": [ 00:12:49.048 { 00:12:49.048 "trtype": "TCP" 00:12:49.048 } 00:12:49.048 ] 00:12:49.048 }, 00:12:49.048 { 00:12:49.048 "admin_qpairs": 0, 00:12:49.048 "completed_nvme_io": 0, 00:12:49.048 "current_admin_qpairs": 0, 00:12:49.048 "current_io_qpairs": 0, 00:12:49.048 "io_qpairs": 0, 00:12:49.048 "name": "nvmf_tgt_poll_group_001", 00:12:49.048 "pending_bdev_io": 0, 00:12:49.048 "transports": [ 00:12:49.048 { 00:12:49.048 "trtype": "TCP" 00:12:49.048 } 00:12:49.048 ] 00:12:49.048 }, 00:12:49.048 { 00:12:49.048 "admin_qpairs": 0, 00:12:49.048 "completed_nvme_io": 0, 00:12:49.048 "current_admin_qpairs": 0, 00:12:49.048 "current_io_qpairs": 0, 00:12:49.048 "io_qpairs": 0, 00:12:49.048 "name": "nvmf_tgt_poll_group_002", 00:12:49.048 "pending_bdev_io": 0, 00:12:49.048 "transports": [ 00:12:49.048 { 00:12:49.048 "trtype": "TCP" 00:12:49.048 } 
00:12:49.048 ] 00:12:49.048 }, 00:12:49.048 { 00:12:49.048 "admin_qpairs": 0, 00:12:49.048 "completed_nvme_io": 0, 00:12:49.048 "current_admin_qpairs": 0, 00:12:49.048 "current_io_qpairs": 0, 00:12:49.048 "io_qpairs": 0, 00:12:49.048 "name": "nvmf_tgt_poll_group_003", 00:12:49.048 "pending_bdev_io": 0, 00:12:49.048 "transports": [ 00:12:49.048 { 00:12:49.048 "trtype": "TCP" 00:12:49.048 } 00:12:49.048 ] 00:12:49.048 } 00:12:49.048 ], 00:12:49.048 "tick_rate": 2200000000 00:12:49.048 }' 00:12:49.048 11:25:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:12:49.048 11:25:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:12:49.048 11:25:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:12:49.048 11:25:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:49.048 11:25:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:12:49.048 11:25:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:12:49.048 11:25:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:12:49.048 11:25:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:12:49.048 11:25:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:49.048 11:25:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:12:49.048 11:25:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:12:49.048 11:25:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:12:49.048 11:25:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:12:49.048 11:25:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:12:49.048 11:25:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.048 11:25:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:49.361 Malloc1 00:12:49.361 11:25:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.361 11:25:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:49.361 11:25:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.361 11:25:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:49.361 11:25:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.361 11:25:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:49.361 11:25:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.361 11:25:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:49.361 11:25:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.361 11:25:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:12:49.361 11:25:34 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.361 11:25:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:49.361 11:25:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.361 11:25:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:12:49.361 11:25:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.361 11:25:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:49.361 [2024-11-20 11:25:34.665995] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:12:49.361 11:25:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.361 11:25:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a --hostid=806d442c-08ba-4719-b76d-33169283459a -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a -a 10.0.0.3 -s 4420 00:12:49.361 11:25:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:12:49.361 11:25:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a --hostid=806d442c-08ba-4719-b76d-33169283459a -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a -a 10.0.0.3 -s 4420 00:12:49.361 11:25:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:12:49.361 11:25:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:49.361 11:25:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:12:49.361 11:25:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:49.361 11:25:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:12:49.361 11:25:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:49.361 11:25:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:12:49.361 11:25:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:12:49.361 11:25:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a --hostid=806d442c-08ba-4719-b76d-33169283459a -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a -a 10.0.0.3 -s 4420 00:12:49.361 [2024-11-20 11:25:34.698439] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a' 00:12:49.361 Failed to write to /dev/nvme-fabrics: Input/output error 00:12:49.361 could not add new controller: failed to write to nvme-fabrics device 00:12:49.361 11:25:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 
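The Input/output error above is the expected result at this point in the test: allow-any-host was just disabled (nvmf_subsystem_allow_any_host -d), so a connect from a host NQN that is not on the subsystem's host list is rejected by nvmf_qpair_access_allowed, and the trace that follows whitelists the host NQN and retries. A minimal sketch of the same access-control flow (rpc.py path and host NQN are illustrative, not taken from this run):

# Sketch, not the test script: per-subsystem host access control over NVMe/TCP.
RPC=./scripts/rpc.py                                      # assumed path to SPDK's rpc.py
NQN=nqn.2016-06.io.spdk:cnode1
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:example-host      # illustrative host NQN

$RPC nvmf_subsystem_allow_any_host -d "$NQN"              # disable allow-any-host
nvme connect -t tcp -n "$NQN" -a 10.0.0.3 -s 4420 --hostnqn="$HOSTNQN"   # expected to fail: host not whitelisted
$RPC nvmf_subsystem_add_host "$NQN" "$HOSTNQN"            # whitelist the host NQN
nvme connect -t tcp -n "$NQN" -a 10.0.0.3 -s 4420 --hostnqn="$HOSTNQN"   # now succeeds
nvme disconnect -n "$NQN"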
00:12:49.361 11:25:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:49.361 11:25:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:49.361 11:25:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:49.361 11:25:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a 00:12:49.361 11:25:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.361 11:25:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:49.361 11:25:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.361 11:25:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a --hostid=806d442c-08ba-4719-b76d-33169283459a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:12:49.361 11:25:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:12:49.361 11:25:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:12:49.361 11:25:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:49.361 11:25:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:49.361 11:25:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:12:51.315 11:25:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:51.315 11:25:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:51.315 11:25:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:51.573 11:25:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:51.573 11:25:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:51.573 11:25:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:12:51.573 11:25:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:51.573 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:51.573 11:25:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:51.573 11:25:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:12:51.573 11:25:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:51.573 11:25:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:51.573 11:25:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:51.573 11:25:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:51.573 11:25:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:12:51.573 11:25:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a 00:12:51.573 11:25:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.573 11:25:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:51.573 11:25:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.573 11:25:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a --hostid=806d442c-08ba-4719-b76d-33169283459a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:12:51.573 11:25:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:12:51.573 11:25:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a --hostid=806d442c-08ba-4719-b76d-33169283459a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:12:51.573 11:25:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:12:51.573 11:25:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:51.573 11:25:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:12:51.573 11:25:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:51.573 11:25:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:12:51.573 11:25:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:51.573 11:25:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:12:51.573 11:25:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:12:51.573 11:25:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a --hostid=806d442c-08ba-4719-b76d-33169283459a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:12:51.573 [2024-11-20 11:25:36.999705] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a' 00:12:51.573 Failed to write to /dev/nvme-fabrics: Input/output error 00:12:51.573 could not add new controller: failed to write to nvme-fabrics device 00:12:51.573 11:25:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:12:51.573 11:25:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:51.573 11:25:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:51.573 11:25:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:51.573 11:25:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:12:51.573 11:25:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.573 11:25:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 
-- # set +x 00:12:51.573 11:25:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.573 11:25:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a --hostid=806d442c-08ba-4719-b76d-33169283459a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:12:51.832 11:25:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:12:51.832 11:25:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:12:51.832 11:25:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:51.832 11:25:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:51.832 11:25:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:12:53.735 11:25:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:53.735 11:25:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:53.735 11:25:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:53.735 11:25:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:53.735 11:25:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:53.735 11:25:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:12:53.735 11:25:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:53.736 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:53.736 11:25:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:53.736 11:25:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:12:53.736 11:25:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:53.736 11:25:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:53.736 11:25:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:53.736 11:25:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:53.736 11:25:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:12:53.736 11:25:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:53.736 11:25:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.736 11:25:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:53.736 11:25:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.736 11:25:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:12:53.736 11:25:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:53.736 11:25:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s 
SPDKISFASTANDAWESOME 00:12:53.736 11:25:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.736 11:25:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:53.736 11:25:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.736 11:25:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:12:53.736 11:25:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.736 11:25:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:53.736 [2024-11-20 11:25:39.284709] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:12:53.736 11:25:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.736 11:25:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:53.736 11:25:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.736 11:25:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:53.736 11:25:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.736 11:25:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:53.736 11:25:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.736 11:25:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:53.736 11:25:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.736 11:25:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a --hostid=806d442c-08ba-4719-b76d-33169283459a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:12:53.995 11:25:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:53.995 11:25:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:12:53.995 11:25:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:53.995 11:25:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:53.995 11:25:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:12:55.938 11:25:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:55.938 11:25:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:55.938 11:25:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:55.938 11:25:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:55.938 11:25:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:55.938 11:25:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:12:55.938 11:25:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:56.196 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:56.196 11:25:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:56.196 11:25:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:12:56.196 11:25:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:56.196 11:25:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:56.196 11:25:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:56.196 11:25:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:56.196 11:25:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:12:56.196 11:25:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:56.196 11:25:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.196 11:25:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:56.196 11:25:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.196 11:25:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:56.196 11:25:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.196 11:25:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:56.196 11:25:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.196 11:25:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:56.196 11:25:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:56.196 11:25:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.196 11:25:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:56.196 11:25:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.196 11:25:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:12:56.196 11:25:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.196 11:25:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:56.196 [2024-11-20 11:25:41.579707] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:12:56.196 11:25:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.196 11:25:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:56.196 11:25:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.196 11:25:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:56.196 11:25:41 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.196 11:25:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:56.196 11:25:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.196 11:25:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:56.196 11:25:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.196 11:25:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a --hostid=806d442c-08ba-4719-b76d-33169283459a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:12:56.196 11:25:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:56.196 11:25:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:12:56.196 11:25:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:56.196 11:25:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:56.196 11:25:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:12:58.725 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:58.725 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:58.725 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:58.725 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:58.725 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:58.725 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:12:58.725 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:58.725 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:58.725 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:58.725 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:12:58.725 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:58.725 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:58.725 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:58.725 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:58.725 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:12:58.725 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:58.725 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.725 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:58.725 11:25:43 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.725 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:58.725 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.725 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:58.725 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.725 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:58.725 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:58.725 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.725 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:58.725 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.725 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:12:58.725 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.725 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:58.725 [2024-11-20 11:25:43.863749] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:12:58.725 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.725 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:58.725 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.725 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:58.725 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.725 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:58.725 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.725 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:58.725 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.725 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a --hostid=806d442c-08ba-4719-b76d-33169283459a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:12:58.725 11:25:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:58.725 11:25:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:12:58.725 11:25:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:58.725 11:25:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:58.725 11:25:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1209 -- # sleep 2 00:13:00.628 11:25:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:00.628 11:25:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:00.628 11:25:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:00.628 11:25:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:13:00.628 11:25:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:00.628 11:25:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:13:00.628 11:25:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:00.628 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:00.628 11:25:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:00.628 11:25:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:13:00.628 11:25:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:13:00.628 11:25:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:00.628 11:25:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:13:00.628 11:25:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:00.628 11:25:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:13:00.628 11:25:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:00.628 11:25:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.628 11:25:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:00.628 11:25:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.628 11:25:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:00.628 11:25:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.628 11:25:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:00.628 11:25:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.628 11:25:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:00.628 11:25:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:00.628 11:25:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.628 11:25:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:00.628 11:25:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.628 11:25:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:13:00.628 11:25:46 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.628 11:25:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:00.628 [2024-11-20 11:25:46.158988] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:13:00.628 11:25:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.628 11:25:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:00.628 11:25:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.628 11:25:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:00.628 11:25:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.628 11:25:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:00.628 11:25:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.628 11:25:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:00.628 11:25:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.629 11:25:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a --hostid=806d442c-08ba-4719-b76d-33169283459a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:13:00.887 11:25:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:00.887 11:25:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:13:00.887 11:25:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:00.887 11:25:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:13:00.887 11:25:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:13:02.788 11:25:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:02.788 11:25:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:02.788 11:25:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:02.788 11:25:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:13:02.788 11:25:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:02.788 11:25:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:13:02.788 11:25:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:03.047 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:03.047 11:25:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:03.047 11:25:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:13:03.047 11:25:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:13:03.047 11:25:48 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:03.047 11:25:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:13:03.047 11:25:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:03.047 11:25:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:13:03.047 11:25:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:03.047 11:25:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.047 11:25:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:03.047 11:25:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.047 11:25:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:03.047 11:25:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.047 11:25:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:03.047 11:25:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.047 11:25:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:03.047 11:25:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:03.047 11:25:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.047 11:25:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:03.047 11:25:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.047 11:25:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:13:03.047 11:25:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.047 11:25:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:03.047 [2024-11-20 11:25:48.458483] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:13:03.047 11:25:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.047 11:25:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:03.047 11:25:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.047 11:25:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:03.047 11:25:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.047 11:25:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:03.047 11:25:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.047 11:25:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:03.047 11:25:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:13:03.047 11:25:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a --hostid=806d442c-08ba-4719-b76d-33169283459a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:13:03.305 11:25:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:03.305 11:25:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:13:03.305 11:25:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:03.305 11:25:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:13:03.305 11:25:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:13:05.206 11:25:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:05.206 11:25:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:05.206 11:25:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:05.206 11:25:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:13:05.206 11:25:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:05.206 11:25:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:13:05.206 11:25:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:05.206 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:05.206 11:25:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:05.206 11:25:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:13:05.206 11:25:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:13:05.206 11:25:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:05.206 11:25:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:13:05.206 11:25:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:05.206 11:25:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:13:05.206 11:25:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:05.206 11:25:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.206 11:25:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:05.206 11:25:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.206 11:25:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:05.206 11:25:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.206 11:25:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:05.206 11:25:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
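Each pass of the seq 1 5 loop above exercises the same subsystem lifecycle: create the subsystem, add a TCP listener on 10.0.0.3:4420, attach Malloc1 as namespace 5, allow any host, connect with nvme-cli, wait for the serial to show up in lsblk, then disconnect, remove the namespace, and delete the subsystem. Condensed into a single pass (a sketch with an assumed rpc.py path, not the rpc.sh source):

# One pass of the create/connect/teardown cycle seen above (sketch; paths are assumptions).
RPC=./scripts/rpc.py
NQN=nqn.2016-06.io.spdk:cnode1

$RPC nvmf_create_subsystem "$NQN" -s SPDKISFASTANDAWESOME
$RPC nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.3 -s 4420
$RPC nvmf_subsystem_add_ns "$NQN" Malloc1 -n 5            # attach Malloc1 as nsid 5
$RPC nvmf_subsystem_allow_any_host "$NQN"
nvme connect -t tcp -n "$NQN" -a 10.0.0.3 -s 4420
until lsblk -l -o NAME,SERIAL | grep -q SPDKISFASTANDAWESOME; do sleep 1; done   # waitforserial equivalent
nvme disconnect -n "$NQN"
$RPC nvmf_subsystem_remove_ns "$NQN" 5
$RPC nvmf_delete_subsystem "$NQN"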
00:13:05.206 11:25:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:13:05.206 11:25:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:05.206 11:25:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:05.206 11:25:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.206 11:25:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:05.206 11:25:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.206 11:25:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:13:05.206 11:25:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.206 11:25:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:05.206 [2024-11-20 11:25:50.758145] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:13:05.206 11:25:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.206 11:25:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:05.206 11:25:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.206 11:25:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:05.206 11:25:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.206 11:25:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:05.206 11:25:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.206 11:25:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:05.206 11:25:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.206 11:25:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:05.206 11:25:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.206 11:25:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:05.206 11:25:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.206 11:25:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:05.206 11:25:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.206 11:25:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:05.466 11:25:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.466 11:25:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:05.466 11:25:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:05.466 11:25:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 
-- # xtrace_disable 00:13:05.466 11:25:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:05.466 11:25:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.466 11:25:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:13:05.466 11:25:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.466 11:25:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:05.466 [2024-11-20 11:25:50.806244] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:13:05.466 11:25:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.466 11:25:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:05.466 11:25:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.466 11:25:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:05.466 11:25:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.466 11:25:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:05.466 11:25:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.466 11:25:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:05.466 11:25:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.466 11:25:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:05.466 11:25:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.466 11:25:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:05.466 11:25:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.466 11:25:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:05.466 11:25:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.466 11:25:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:05.466 11:25:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.466 11:25:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:05.466 11:25:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:05.466 11:25:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.466 11:25:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:05.466 11:25:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.466 11:25:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:13:05.466 11:25:50 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.466 11:25:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:05.466 [2024-11-20 11:25:50.854281] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:13:05.466 11:25:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.466 11:25:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:05.466 11:25:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.466 11:25:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:05.466 11:25:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.466 11:25:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:05.466 11:25:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.466 11:25:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:05.466 11:25:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.466 11:25:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:05.466 11:25:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.466 11:25:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:05.466 11:25:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.466 11:25:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:05.466 11:25:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.466 11:25:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:05.466 11:25:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.466 11:25:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:05.466 11:25:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:05.466 11:25:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.466 11:25:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:05.466 11:25:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.466 11:25:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:13:05.466 11:25:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.466 11:25:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:05.466 [2024-11-20 11:25:50.902287] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:13:05.466 11:25:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.466 
11:25:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:05.466 11:25:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.466 11:25:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:05.466 11:25:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.466 11:25:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:05.466 11:25:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.466 11:25:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:05.466 11:25:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.466 11:25:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:05.466 11:25:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.466 11:25:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:05.466 11:25:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.466 11:25:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:05.466 11:25:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.466 11:25:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:05.466 11:25:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.466 11:25:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:05.466 11:25:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:05.466 11:25:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.466 11:25:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:05.466 11:25:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.466 11:25:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:13:05.466 11:25:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.466 11:25:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:05.466 [2024-11-20 11:25:50.950331] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:13:05.466 11:25:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.466 11:25:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:05.467 11:25:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.467 11:25:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:05.467 11:25:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:13:05.467 11:25:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:05.467 11:25:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.467 11:25:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:05.467 11:25:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.467 11:25:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:05.467 11:25:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.467 11:25:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:05.467 11:25:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.467 11:25:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:05.467 11:25:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.467 11:25:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:05.467 11:25:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.467 11:25:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:13:05.467 11:25:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.467 11:25:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:05.467 11:25:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.467 11:25:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:13:05.467 "poll_groups": [ 00:13:05.467 { 00:13:05.467 "admin_qpairs": 2, 00:13:05.467 "completed_nvme_io": 116, 00:13:05.467 "current_admin_qpairs": 0, 00:13:05.467 "current_io_qpairs": 0, 00:13:05.467 "io_qpairs": 16, 00:13:05.467 "name": "nvmf_tgt_poll_group_000", 00:13:05.467 "pending_bdev_io": 0, 00:13:05.467 "transports": [ 00:13:05.467 { 00:13:05.467 "trtype": "TCP" 00:13:05.467 } 00:13:05.467 ] 00:13:05.467 }, 00:13:05.467 { 00:13:05.467 "admin_qpairs": 3, 00:13:05.467 "completed_nvme_io": 67, 00:13:05.467 "current_admin_qpairs": 0, 00:13:05.467 "current_io_qpairs": 0, 00:13:05.467 "io_qpairs": 17, 00:13:05.467 "name": "nvmf_tgt_poll_group_001", 00:13:05.467 "pending_bdev_io": 0, 00:13:05.467 "transports": [ 00:13:05.467 { 00:13:05.467 "trtype": "TCP" 00:13:05.467 } 00:13:05.467 ] 00:13:05.467 }, 00:13:05.467 { 00:13:05.467 "admin_qpairs": 1, 00:13:05.467 "completed_nvme_io": 70, 00:13:05.467 "current_admin_qpairs": 0, 00:13:05.467 "current_io_qpairs": 0, 00:13:05.467 "io_qpairs": 19, 00:13:05.467 "name": "nvmf_tgt_poll_group_002", 00:13:05.467 "pending_bdev_io": 0, 00:13:05.467 "transports": [ 00:13:05.467 { 00:13:05.467 "trtype": "TCP" 00:13:05.467 } 00:13:05.467 ] 00:13:05.467 }, 00:13:05.467 { 00:13:05.467 "admin_qpairs": 1, 00:13:05.467 "completed_nvme_io": 167, 00:13:05.467 "current_admin_qpairs": 0, 00:13:05.467 "current_io_qpairs": 0, 00:13:05.467 "io_qpairs": 18, 00:13:05.467 "name": "nvmf_tgt_poll_group_003", 00:13:05.467 "pending_bdev_io": 0, 00:13:05.467 "transports": [ 00:13:05.467 { 00:13:05.467 "trtype": "TCP" 00:13:05.467 } 00:13:05.467 ] 00:13:05.467 } 00:13:05.467 ], 
00:13:05.467 "tick_rate": 2200000000 00:13:05.467 }' 00:13:05.467 11:25:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:13:05.467 11:25:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:13:05.467 11:25:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:13:05.467 11:25:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:05.777 11:25:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:13:05.777 11:25:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:13:05.777 11:25:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:13:05.777 11:25:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:13:05.777 11:25:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:05.777 11:25:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 70 > 0 )) 00:13:05.777 11:25:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:13:05.777 11:25:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:13:05.777 11:25:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:13:05.777 11:25:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:05.777 11:25:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # sync 00:13:05.777 11:25:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:05.777 11:25:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set +e 00:13:05.777 11:25:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:05.777 11:25:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:05.777 rmmod nvme_tcp 00:13:05.777 rmmod nvme_fabrics 00:13:05.777 rmmod nvme_keyring 00:13:05.777 11:25:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:05.777 11:25:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@128 -- # set -e 00:13:05.777 11:25:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@129 -- # return 0 00:13:05.777 11:25:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@517 -- # '[' -n 73496 ']' 00:13:05.777 11:25:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@518 -- # killprocess 73496 00:13:05.777 11:25:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # '[' -z 73496 ']' 00:13:05.777 11:25:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # kill -0 73496 00:13:05.777 11:25:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # uname 00:13:05.777 11:25:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:05.777 11:25:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73496 00:13:05.777 11:25:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:05.777 11:25:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:05.777 killing process with pid 73496 00:13:05.777 11:25:51 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73496' 00:13:05.777 11:25:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@973 -- # kill 73496 00:13:05.777 11:25:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@978 -- # wait 73496 00:13:06.038 11:25:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:06.038 11:25:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:06.038 11:25:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:06.038 11:25:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # iptr 00:13:06.038 11:25:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-save 00:13:06.038 11:25:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:06.038 11:25:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-restore 00:13:06.038 11:25:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:06.038 11:25:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:13:06.038 11:25:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:13:06.038 11:25:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:13:06.038 11:25:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:13:06.038 11:25:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:13:06.038 11:25:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:13:06.038 11:25:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:13:06.038 11:25:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:13:06.038 11:25:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:13:06.038 11:25:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:13:06.038 11:25:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:13:06.038 11:25:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:13:06.038 11:25:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:06.038 11:25:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:06.038 11:25:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@246 -- # remove_spdk_ns 00:13:06.038 11:25:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:06.038 11:25:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:06.038 11:25:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:06.299 11:25:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@300 -- # return 0 00:13:06.299 00:13:06.299 real 0m18.296s 00:13:06.299 user 1m7.093s 00:13:06.299 sys 0m2.712s 00:13:06.299 11:25:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc 
-- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:06.299 11:25:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:06.299 ************************************ 00:13:06.299 END TEST nvmf_rpc 00:13:06.299 ************************************ 00:13:06.299 11:25:51 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /home/vagrant/spdk_repo/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:13:06.299 11:25:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:06.299 11:25:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:06.299 11:25:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:06.299 ************************************ 00:13:06.299 START TEST nvmf_invalid 00:13:06.299 ************************************ 00:13:06.299 11:25:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:13:06.299 * Looking for test storage... 00:13:06.299 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:06.299 11:25:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:06.299 11:25:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # lcov --version 00:13:06.299 11:25:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:06.299 11:25:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:06.299 11:25:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:06.299 11:25:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:06.299 11:25:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:06.299 11:25:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-: 00:13:06.299 11:25:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # read -ra ver1 00:13:06.299 11:25:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-: 00:13:06.299 11:25:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2 00:13:06.299 11:25:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<' 00:13:06.299 11:25:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2 00:13:06.299 11:25:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1 00:13:06.299 11:25:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:06.299 11:25:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in 00:13:06.299 11:25:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1 00:13:06.299 11:25:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:06.299 11:25:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:06.299 11:25:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1 00:13:06.299 11:25:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1 00:13:06.299 11:25:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:06.299 11:25:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1 00:13:06.299 11:25:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1 00:13:06.299 11:25:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2 00:13:06.299 11:25:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2 00:13:06.299 11:25:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:06.299 11:25:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2 00:13:06.299 11:25:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2 00:13:06.299 11:25:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:06.299 11:25:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:06.299 11:25:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0 00:13:06.299 11:25:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:06.299 11:25:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:06.299 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:06.299 --rc genhtml_branch_coverage=1 00:13:06.299 --rc genhtml_function_coverage=1 00:13:06.299 --rc genhtml_legend=1 00:13:06.299 --rc geninfo_all_blocks=1 00:13:06.299 --rc geninfo_unexecuted_blocks=1 00:13:06.299 00:13:06.299 ' 00:13:06.299 11:25:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:06.299 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:06.299 --rc genhtml_branch_coverage=1 00:13:06.299 --rc genhtml_function_coverage=1 00:13:06.299 --rc genhtml_legend=1 00:13:06.299 --rc geninfo_all_blocks=1 00:13:06.299 --rc geninfo_unexecuted_blocks=1 00:13:06.299 00:13:06.299 ' 00:13:06.299 11:25:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:06.299 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:06.299 --rc genhtml_branch_coverage=1 00:13:06.299 --rc genhtml_function_coverage=1 00:13:06.299 --rc genhtml_legend=1 00:13:06.299 --rc geninfo_all_blocks=1 00:13:06.299 --rc geninfo_unexecuted_blocks=1 00:13:06.299 00:13:06.299 ' 00:13:06.299 11:25:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:06.299 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:06.299 --rc genhtml_branch_coverage=1 00:13:06.299 --rc genhtml_function_coverage=1 00:13:06.299 --rc genhtml_legend=1 00:13:06.299 --rc geninfo_all_blocks=1 00:13:06.299 --rc geninfo_unexecuted_blocks=1 00:13:06.299 00:13:06.299 ' 00:13:06.299 11:25:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:06.299 11:25:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:13:06.299 11:25:51 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:06.299 11:25:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:06.299 11:25:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:06.299 11:25:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:06.299 11:25:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:06.299 11:25:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:06.299 11:25:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:06.299 11:25:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:06.299 11:25:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:06.299 11:25:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:06.299 11:25:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a 00:13:06.299 11:25:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=806d442c-08ba-4719-b76d-33169283459a 00:13:06.300 11:25:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:06.300 11:25:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:06.300 11:25:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:06.300 11:25:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:06.300 11:25:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:06.300 11:25:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob 00:13:06.300 11:25:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:06.300 11:25:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:06.300 11:25:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:06.300 11:25:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:06.300 11:25:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:06.300 11:25:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:06.300 11:25:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:13:06.300 11:25:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:06.300 11:25:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # : 0 00:13:06.300 11:25:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:06.300 11:25:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:06.300 11:25:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:06.300 11:25:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:06.300 11:25:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:06.300 11:25:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:06.300 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:06.300 11:25:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:06.300 11:25:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:06.300 11:25:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:06.300 11:25:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py 00:13:06.300 11:25:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@12 -- # 
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:06.300 11:25:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:13:06.300 11:25:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:13:06.300 11:25:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:13:06.300 11:25:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:13:06.300 11:25:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:06.300 11:25:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:06.300 11:25:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:06.300 11:25:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:06.300 11:25:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:06.300 11:25:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:06.300 11:25:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:06.300 11:25:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:06.300 11:25:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:13:06.300 11:25:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:13:06.300 11:25:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:13:06.300 11:25:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:13:06.300 11:25:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:13:06.300 11:25:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@460 -- # nvmf_veth_init 00:13:06.300 11:25:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:06.300 11:25:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:13:06.300 11:25:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:13:06.300 11:25:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:13:06.300 11:25:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:06.300 11:25:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:13:06.300 11:25:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:06.300 11:25:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:13:06.300 11:25:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:06.300 11:25:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:13:06.300 11:25:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:06.300 11:25:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
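A condensed sketch of the veth/bridge topology that nvmf_veth_init assembles in the trace that follows. Interface names, namespace name, addresses, the NVMe/TCP port 4420, and the firewall rules are taken verbatim from the log lines below; the consolidated loop form and `set -e` are illustrative additions, and the sketch assumes root privileges with iproute2 and iptables available rather than being part of the test scripts themselves.

    #!/usr/bin/env bash
    # Sketch of the namespace/veth/bridge layout built by nvmf_veth_init (names from the trace).
    set -e

    ip netns add nvmf_tgt_ns_spdk

    # Two initiator-side and two target-side veth pairs
    ip link add nvmf_init_if  type veth peer name nvmf_init_br
    ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
    ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2

    # Target ends move into the namespace that will host nvmf_tgt
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

    # Addressing: initiators 10.0.0.1/.2 on the host, targets 10.0.0.3/.4 in the namespace
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip addr add 10.0.0.2/24 dev nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2

    # Bring links up on both sides
    for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" up
    done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up

    # Bridge the four peer ends together so initiator and target can reach each other
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" master nvmf_br
    done

    # Accept NVMe/TCP (port 4420) from the initiator interfaces and allow forwarding across the bridge
    iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
    iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

With this topology in place, the pings to 10.0.0.3/10.0.0.4 and the reverse pings from inside nvmf_tgt_ns_spdk seen later in the trace verify connectivity before the target is started.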
00:13:06.300 11:25:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:06.300 11:25:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:06.300 11:25:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:06.300 11:25:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:06.300 11:25:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:13:06.300 Cannot find device "nvmf_init_br" 00:13:06.300 11:25:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@162 -- # true 00:13:06.300 11:25:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:13:06.300 Cannot find device "nvmf_init_br2" 00:13:06.300 11:25:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@163 -- # true 00:13:06.300 11:25:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:13:06.563 Cannot find device "nvmf_tgt_br" 00:13:06.563 11:25:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@164 -- # true 00:13:06.563 11:25:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:13:06.563 Cannot find device "nvmf_tgt_br2" 00:13:06.563 11:25:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@165 -- # true 00:13:06.563 11:25:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:13:06.563 Cannot find device "nvmf_init_br" 00:13:06.563 11:25:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@166 -- # true 00:13:06.563 11:25:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:13:06.563 Cannot find device "nvmf_init_br2" 00:13:06.563 11:25:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@167 -- # true 00:13:06.563 11:25:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:13:06.563 Cannot find device "nvmf_tgt_br" 00:13:06.563 11:25:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@168 -- # true 00:13:06.563 11:25:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:13:06.563 Cannot find device "nvmf_tgt_br2" 00:13:06.563 11:25:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@169 -- # true 00:13:06.563 11:25:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:13:06.563 Cannot find device "nvmf_br" 00:13:06.563 11:25:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@170 -- # true 00:13:06.563 11:25:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:13:06.563 Cannot find device "nvmf_init_if" 00:13:06.563 11:25:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@171 -- # true 00:13:06.563 11:25:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:13:06.563 Cannot find device "nvmf_init_if2" 00:13:06.563 11:25:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@172 -- # true 00:13:06.563 11:25:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:06.563 Cannot open network namespace 
"nvmf_tgt_ns_spdk": No such file or directory 00:13:06.563 11:25:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@173 -- # true 00:13:06.563 11:25:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:06.563 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:06.563 11:25:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@174 -- # true 00:13:06.563 11:25:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:13:06.563 11:25:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:06.563 11:25:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:13:06.563 11:25:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:06.563 11:25:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:06.563 11:25:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:06.563 11:25:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:06.563 11:25:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:06.563 11:25:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:13:06.563 11:25:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:13:06.563 11:25:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:13:06.563 11:25:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:13:06.563 11:25:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:13:06.563 11:25:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:13:06.563 11:25:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:13:06.563 11:25:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:13:06.823 11:25:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:13:06.823 11:25:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:06.823 11:25:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:06.823 11:25:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:06.823 11:25:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:13:06.823 11:25:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:13:06.823 11:25:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:13:06.823 11:25:52 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:13:06.823 11:25:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:06.823 11:25:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:06.823 11:25:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:06.823 11:25:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:13:06.823 11:25:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:13:06.823 11:25:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:13:06.823 11:25:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:06.823 11:25:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:13:06.823 11:25:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:13:06.823 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:06.823 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.142 ms 00:13:06.823 00:13:06.823 --- 10.0.0.3 ping statistics --- 00:13:06.823 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:06.823 rtt min/avg/max/mdev = 0.142/0.142/0.142/0.000 ms 00:13:06.823 11:25:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:13:06.823 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:13:06.823 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.070 ms 00:13:06.823 00:13:06.823 --- 10.0.0.4 ping statistics --- 00:13:06.823 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:06.823 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:13:06.823 11:25:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:06.823 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:06.823 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:13:06.823 00:13:06.823 --- 10.0.0.1 ping statistics --- 00:13:06.823 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:06.823 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:13:06.823 11:25:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:13:06.823 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:06.823 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.052 ms 00:13:06.823 00:13:06.823 --- 10.0.0.2 ping statistics --- 00:13:06.823 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:06.823 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:13:06.823 11:25:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:06.823 11:25:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@461 -- # return 0 00:13:06.823 11:25:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:06.823 11:25:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:06.823 11:25:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:06.823 11:25:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:06.823 11:25:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:06.823 11:25:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:06.824 11:25:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:06.824 11:25:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:13:06.824 11:25:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:06.824 11:25:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:06.824 11:25:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:06.824 11:25:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@509 -- # nvmfpid=74046 00:13:06.824 11:25:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:06.824 11:25:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@510 -- # waitforlisten 74046 00:13:06.824 11:25:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # '[' -z 74046 ']' 00:13:06.824 11:25:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:06.824 11:25:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:06.824 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:06.824 11:25:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:06.824 11:25:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:06.824 11:25:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:06.824 [2024-11-20 11:25:52.357844] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 
00:13:06.824 [2024-11-20 11:25:52.357950] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:07.083 [2024-11-20 11:25:52.507276] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:07.083 [2024-11-20 11:25:52.540685] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:07.083 [2024-11-20 11:25:52.540740] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:07.083 [2024-11-20 11:25:52.540752] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:07.083 [2024-11-20 11:25:52.540760] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:07.083 [2024-11-20 11:25:52.540767] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:07.083 [2024-11-20 11:25:52.541562] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:07.083 [2024-11-20 11:25:52.541634] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:07.083 [2024-11-20 11:25:52.541775] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:07.083 [2024-11-20 11:25:52.541783] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:07.083 11:25:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:07.083 11:25:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@868 -- # return 0 00:13:07.083 11:25:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:07.084 11:25:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:07.084 11:25:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:07.084 11:25:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:07.084 11:25:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:13:07.084 11:25:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode10379 00:13:07.342 [2024-11-20 11:25:52.917841] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:13:07.600 11:25:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='2024/11/20 11:25:52 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode10379 tgt_name:foobar], err: error received for nvmf_create_subsystem method, err: Code=-32603 Msg=Unable to find target foobar 00:13:07.600 request: 00:13:07.600 { 00:13:07.600 "method": "nvmf_create_subsystem", 00:13:07.600 "params": { 00:13:07.600 "nqn": "nqn.2016-06.io.spdk:cnode10379", 00:13:07.600 "tgt_name": "foobar" 00:13:07.600 } 00:13:07.600 } 00:13:07.600 Got JSON-RPC error response 00:13:07.600 GoRPCClient: error on JSON-RPC call' 00:13:07.600 11:25:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ 2024/11/20 11:25:52 error on JSON-RPC call, method: nvmf_create_subsystem, 
params: map[nqn:nqn.2016-06.io.spdk:cnode10379 tgt_name:foobar], err: error received for nvmf_create_subsystem method, err: Code=-32603 Msg=Unable to find target foobar 00:13:07.600 request: 00:13:07.600 { 00:13:07.600 "method": "nvmf_create_subsystem", 00:13:07.600 "params": { 00:13:07.600 "nqn": "nqn.2016-06.io.spdk:cnode10379", 00:13:07.600 "tgt_name": "foobar" 00:13:07.600 } 00:13:07.600 } 00:13:07.600 Got JSON-RPC error response 00:13:07.600 GoRPCClient: error on JSON-RPC call == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:13:07.600 11:25:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:13:07.600 11:25:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode19468 00:13:07.858 [2024-11-20 11:25:53.298217] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode19468: invalid serial number 'SPDKISFASTANDAWESOME' 00:13:07.858 11:25:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='2024/11/20 11:25:53 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode19468 serial_number:SPDKISFASTANDAWESOME], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN SPDKISFASTANDAWESOME 00:13:07.858 request: 00:13:07.858 { 00:13:07.858 "method": "nvmf_create_subsystem", 00:13:07.858 "params": { 00:13:07.858 "nqn": "nqn.2016-06.io.spdk:cnode19468", 00:13:07.858 "serial_number": "SPDKISFASTANDAWESOME\u001f" 00:13:07.858 } 00:13:07.858 } 00:13:07.858 Got JSON-RPC error response 00:13:07.858 GoRPCClient: error on JSON-RPC call' 00:13:07.858 11:25:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ 2024/11/20 11:25:53 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode19468 serial_number:SPDKISFASTANDAWESOME], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN SPDKISFASTANDAWESOME 00:13:07.858 request: 00:13:07.858 { 00:13:07.858 "method": "nvmf_create_subsystem", 00:13:07.858 "params": { 00:13:07.858 "nqn": "nqn.2016-06.io.spdk:cnode19468", 00:13:07.858 "serial_number": "SPDKISFASTANDAWESOME\u001f" 00:13:07.858 } 00:13:07.858 } 00:13:07.858 Got JSON-RPC error response 00:13:07.858 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \S\N* ]] 00:13:07.858 11:25:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:13:07.858 11:25:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode31759 00:13:08.116 [2024-11-20 11:25:53.702576] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode31759: invalid model number 'SPDK_Controller' 00:13:08.375 11:25:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='2024/11/20 11:25:53 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:SPDK_Controller nqn:nqn.2016-06.io.spdk:cnode31759], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN SPDK_Controller 00:13:08.375 request: 00:13:08.375 { 00:13:08.375 "method": "nvmf_create_subsystem", 00:13:08.375 "params": { 00:13:08.376 "nqn": "nqn.2016-06.io.spdk:cnode31759", 00:13:08.376 "model_number": "SPDK_Controller\u001f" 
00:13:08.376 } 00:13:08.376 } 00:13:08.376 Got JSON-RPC error response 00:13:08.376 GoRPCClient: error on JSON-RPC call' 00:13:08.376 11:25:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ 2024/11/20 11:25:53 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:SPDK_Controller nqn:nqn.2016-06.io.spdk:cnode31759], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN SPDK_Controller 00:13:08.376 request: 00:13:08.376 { 00:13:08.376 "method": "nvmf_create_subsystem", 00:13:08.376 "params": { 00:13:08.376 "nqn": "nqn.2016-06.io.spdk:cnode31759", 00:13:08.376 "model_number": "SPDK_Controller\u001f" 00:13:08.376 } 00:13:08.376 } 00:13:08.376 Got JSON-RPC error response 00:13:08.376 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \M\N* ]] 00:13:08.376 11:25:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:13:08.376 11:25:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:13:08.376 11:25:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:13:08.376 11:25:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:13:08.376 11:25:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:13:08.376 11:25:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:13:08.376 11:25:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:08.376 11:25:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 121 00:13:08.376 11:25:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x79' 00:13:08.376 11:25:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=y 00:13:08.376 11:25:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:08.376 11:25:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:08.376 11:25:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:13:08.376 11:25:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:13:08.376 11:25:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:13:08.376 11:25:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:08.376 11:25:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:08.376 11:25:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:13:08.376 11:25:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:13:08.376 11:25:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:13:08.376 11:25:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:08.376 
11:25:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:08.376 11:25:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:13:08.376 11:25:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:13:08.376 11:25:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:13:08.376 11:25:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:08.376 11:25:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:08.376 11:25:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:13:08.376 11:25:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:13:08.376 11:25:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:13:08.376 11:25:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:08.376 11:25:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:08.376 11:25:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:13:08.376 11:25:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:13:08.376 11:25:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:13:08.376 11:25:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:08.376 11:25:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:08.376 11:25:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 95 00:13:08.376 11:25:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5f' 00:13:08.376 11:25:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=_ 00:13:08.376 11:25:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:08.376 11:25:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:08.376 11:25:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:13:08.376 11:25:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:13:08.376 11:25:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:13:08.376 11:25:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:08.376 11:25:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:08.376 11:25:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:13:08.376 11:25:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:13:08.376 11:25:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:13:08.376 11:25:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:08.376 11:25:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:08.376 11:25:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:13:08.376 11:25:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:13:08.376 11:25:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:13:08.376 
11:25:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:08.376 11:25:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:08.376 11:25:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 94 00:13:08.376 11:25:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5e' 00:13:08.376 11:25:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='^' 00:13:08.376 11:25:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:08.376 11:25:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:08.376 11:25:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:13:08.376 11:25:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 00:13:08.376 11:25:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:13:08.376 11:25:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:08.376 11:25:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:08.376 11:25:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:13:08.376 11:25:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:13:08.376 11:25:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:13:08.376 11:25:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:08.376 11:25:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:08.376 11:25:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:13:08.376 11:25:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:13:08.376 11:25:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:13:08.376 11:25:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:08.376 11:25:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:08.376 11:25:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 121 00:13:08.376 11:25:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x79' 00:13:08.376 11:25:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=y 00:13:08.376 11:25:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:08.376 11:25:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:08.376 11:25:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 00:13:08.376 11:25:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 00:13:08.376 11:25:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:13:08.376 11:25:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:08.376 11:25:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:08.376 11:25:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:13:08.376 11:25:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 
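The long run of printf %x / echo -e / string+= entries above and below is invalid.sh's gen_random_s helper building a random 21-character serial number one character at a time from code points 32..127. A condensed sketch of that pattern, assuming plain bash and not the literal SPDK helper:

# Condensed sketch of the gen_random_s pattern traced in this log (assumption: plain bash).
gen_random_s_sketch() {
    local length=$1 string='' ll code hex
    for ((ll = 0; ll < length; ll++)); do
        code=$((RANDOM % 96 + 32))     # random code point in 32..127, matching the chars array above
        hex=$(printf '%x' "$code")     # e.g. 121 -> 79
        string+=$(echo -e "\x$hex")    # decode the hex code to its character, e.g. 'y', and append
    done
    echo "$string"
}
# gen_random_s_sketch 21   # emits a 21-character string like the serial number used below

The resulting 21-character string is then passed to nvmf_create_subsystem as a serial number; because NVMe serial numbers are limited to 20 bytes, the RPC is expected to fail with the Invalid SN error shown further down.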
00:13:08.376 11:25:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:13:08.376 11:25:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:08.376 11:25:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:08.376 11:25:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:13:08.376 11:25:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:13:08.376 11:25:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:13:08.376 11:25:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:08.376 11:25:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:08.376 11:25:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:13:08.376 11:25:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:13:08.376 11:25:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:13:08.376 11:25:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:08.376 11:25:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:08.377 11:25:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:13:08.377 11:25:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:13:08.377 11:25:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:13:08.377 11:25:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:08.377 11:25:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:08.377 11:25:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:13:08.377 11:25:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:13:08.377 11:25:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:13:08.377 11:25:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:08.377 11:25:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:08.377 11:25:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ y == \- ]] 00:13:08.377 11:25:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'yU|X"n_PWW^TIry{AB=7F' 00:13:08.377 11:25:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -s 'yU|X"n_PWW^TIry{AB=7F' nqn.2016-06.io.spdk:cnode11012 00:13:08.636 [2024-11-20 11:25:54.054908] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode11012: invalid serial number 'yU|X"n_PWW^TIry{AB=7F' 00:13:08.636 11:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='2024/11/20 11:25:54 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode11012 serial_number:yU|X"n_PWW^TIry{AB=7F], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN yU|X"n_PWW^TIry{AB=7F 00:13:08.636 request: 00:13:08.636 { 00:13:08.636 "method": "nvmf_create_subsystem", 00:13:08.636 "params": { 00:13:08.636 "nqn": "nqn.2016-06.io.spdk:cnode11012", 
00:13:08.636 "serial_number": "yU|X\"n_PWW^TIry{AB=7F" 00:13:08.636 } 00:13:08.636 } 00:13:08.636 Got JSON-RPC error response 00:13:08.636 GoRPCClient: error on JSON-RPC call' 00:13:08.636 11:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ 2024/11/20 11:25:54 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode11012 serial_number:yU|X"n_PWW^TIry{AB=7F], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN yU|X"n_PWW^TIry{AB=7F 00:13:08.636 request: 00:13:08.636 { 00:13:08.636 "method": "nvmf_create_subsystem", 00:13:08.636 "params": { 00:13:08.636 "nqn": "nqn.2016-06.io.spdk:cnode11012", 00:13:08.636 "serial_number": "yU|X\"n_PWW^TIry{AB=7F" 00:13:08.636 } 00:13:08.636 } 00:13:08.636 Got JSON-RPC error response 00:13:08.636 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \S\N* ]] 00:13:08.636 11:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:13:08.636 11:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:13:08.636 11:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:13:08.636 11:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:13:08.636 11:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:13:08.636 11:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:13:08.636 11:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:08.636 11:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:13:08.636 11:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:13:08.636 11:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:13:08.636 11:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:08.636 11:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:08.636 11:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:13:08.637 11:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:13:08.637 11:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:13:08.637 11:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:08.637 11:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:08.637 11:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:13:08.637 11:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:13:08.637 11:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:13:08.637 11:25:54 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:08.637 11:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:08.637 11:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:13:08.637 11:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:13:08.637 11:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:13:08.637 11:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:08.637 11:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:08.637 11:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:13:08.637 11:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:13:08.637 11:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:13:08.637 11:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:08.637 11:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:08.637 11:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:13:08.637 11:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:13:08.637 11:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:13:08.637 11:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:08.637 11:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:08.637 11:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:13:08.637 11:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:13:08.637 11:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:13:08.637 11:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:08.637 11:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:08.637 11:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:13:08.637 11:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:13:08.637 11:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:13:08.637 11:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:08.637 11:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:08.637 11:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:13:08.637 11:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:13:08.637 11:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:13:08.637 11:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:08.637 11:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:08.637 11:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:13:08.637 11:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:13:08.637 
11:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:13:08.637 11:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:08.637 11:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:08.637 11:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:13:08.637 11:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:13:08.637 11:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:13:08.637 11:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:08.637 11:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:08.637 11:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:13:08.637 11:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:13:08.637 11:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:13:08.637 11:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:08.637 11:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:08.637 11:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:13:08.637 11:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:13:08.637 11:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:13:08.637 11:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:08.637 11:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:08.637 11:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:13:08.637 11:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:13:08.637 11:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:13:08.637 11:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:08.637 11:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:08.637 11:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:13:08.637 11:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:13:08.637 11:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:13:08.637 11:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:08.637 11:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:08.637 11:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:13:08.637 11:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:13:08.637 11:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:13:08.637 11:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:08.637 11:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:08.637 11:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:13:08.637 
11:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 00:13:08.637 11:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:13:08.637 11:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:08.637 11:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:08.637 11:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:13:08.637 11:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:13:08.637 11:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:13:08.637 11:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:08.637 11:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:08.637 11:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:13:08.637 11:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:13:08.637 11:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:13:08.637 11:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:08.637 11:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:08.637 11:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:13:08.637 11:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:13:08.637 11:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:13:08.637 11:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:08.637 11:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:08.637 11:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:13:08.637 11:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:13:08.637 11:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:13:08.637 11:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:08.637 11:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:08.637 11:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:13:08.637 11:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:13:08.637 11:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:13:08.637 11:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:08.637 11:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:08.637 11:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:13:08.637 11:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:13:08.637 11:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:13:08.637 11:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:08.637 11:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:08.637 
11:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:13:08.637 11:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:13:08.637 11:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:13:08.637 11:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:08.637 11:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:08.637 11:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:13:08.637 11:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:13:08.637 11:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 00:13:08.637 11:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:08.637 11:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:08.638 11:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:13:08.638 11:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:13:08.638 11:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:13:08.638 11:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:08.638 11:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:08.638 11:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:13:08.638 11:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:13:08.638 11:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:13:08.638 11:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:08.638 11:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:08.638 11:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:13:08.638 11:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:13:08.638 11:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:13:08.638 11:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:08.638 11:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:08.638 11:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:13:08.638 11:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:13:08.638 11:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:13:08.638 11:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:08.638 11:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:08.638 11:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111 00:13:08.638 11:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f' 00:13:08.638 11:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=o 00:13:08.638 11:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:08.638 
11:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:08.638 11:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:13:08.638 11:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:13:08.638 11:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:13:08.638 11:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:08.638 11:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:08.638 11:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58 00:13:08.638 11:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a' 00:13:08.638 11:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=: 00:13:08.638 11:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:08.638 11:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:08.638 11:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:13:08.638 11:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:13:08.638 11:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:13:08.638 11:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:08.638 11:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:08.896 11:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:13:08.896 11:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:13:08.896 11:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:13:08.896 11:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:08.896 11:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:08.896 11:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:13:08.896 11:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:13:08.896 11:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:13:08.896 11:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:08.897 11:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:08.897 11:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:13:08.897 11:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:13:08.897 11:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:13:08.897 11:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:08.897 11:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:08.897 11:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:13:08.897 11:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:13:08.897 11:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:13:08.897 
11:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:08.897 11:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:08.897 11:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:13:08.897 11:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:13:08.897 11:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:13:08.897 11:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:08.897 11:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:08.897 11:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 95 00:13:08.897 11:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5f' 00:13:08.897 11:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=_ 00:13:08.897 11:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:08.897 11:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:08.897 11:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:13:08.897 11:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:13:08.897 11:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:13:08.897 11:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:08.897 11:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:08.897 11:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:13:08.897 11:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 00:13:08.897 11:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:13:08.897 11:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:08.897 11:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:08.897 11:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ l == \- ]] 00:13:08.897 11:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'l\u"z]a5>l"c5h81 >G8'\''b4C.vM=1ob:ZC#GCH_K)' 00:13:08.897 11:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -d 'l\u"z]a5>l"c5h81 >G8'\''b4C.vM=1ob:ZC#GCH_K)' nqn.2016-06.io.spdk:cnode4573 00:13:09.155 [2024-11-20 11:25:54.571383] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode4573: invalid model number 'l\u"z]a5>l"c5h81 >G8'b4C.vM=1ob:ZC#GCH_K)' 00:13:09.155 11:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # out='2024/11/20 11:25:54 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:l\u"z]a5>l"c5h81 >G8'\''b4C.vM=1ob:ZC#GCH_K) nqn:nqn.2016-06.io.spdk:cnode4573], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN l\u"z]a5>l"c5h81 >G8'\''b4C.vM=1ob:ZC#GCH_K) 00:13:09.156 request: 00:13:09.156 { 00:13:09.156 "method": "nvmf_create_subsystem", 00:13:09.156 "params": { 00:13:09.156 "nqn": "nqn.2016-06.io.spdk:cnode4573", 
00:13:09.156 "model_number": "l\\u\"z]a5>l\"c5h81 >G8'\''b4C.vM=1ob:ZC#GCH_K)" 00:13:09.156 } 00:13:09.156 } 00:13:09.156 Got JSON-RPC error response 00:13:09.156 GoRPCClient: error on JSON-RPC call' 00:13:09.156 11:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@59 -- # [[ 2024/11/20 11:25:54 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:l\u"z]a5>l"c5h81 >G8'b4C.vM=1ob:ZC#GCH_K) nqn:nqn.2016-06.io.spdk:cnode4573], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN l\u"z]a5>l"c5h81 >G8'b4C.vM=1ob:ZC#GCH_K) 00:13:09.156 request: 00:13:09.156 { 00:13:09.156 "method": "nvmf_create_subsystem", 00:13:09.156 "params": { 00:13:09.156 "nqn": "nqn.2016-06.io.spdk:cnode4573", 00:13:09.156 "model_number": "l\\u\"z]a5>l\"c5h81 >G8'b4C.vM=1ob:ZC#GCH_K)" 00:13:09.156 } 00:13:09.156 } 00:13:09.156 Got JSON-RPC error response 00:13:09.156 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \M\N* ]] 00:13:09.156 11:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:13:09.414 [2024-11-20 11:25:54.907833] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:09.414 11:25:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:13:09.671 11:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:13:09.671 11:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:13:09.671 11:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:13:09.672 11:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:13:09.672 11:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:13:09.962 [2024-11-20 11:25:55.512503] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:13:09.962 11:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # out='2024/11/20 11:25:55 error on JSON-RPC call, method: nvmf_subsystem_remove_listener, params: map[listen_address:map[traddr: trsvcid:4421 trtype:tcp] nqn:nqn.2016-06.io.spdk:cnode], err: error received for nvmf_subsystem_remove_listener method, err: Code=-32602 Msg=Invalid parameters 00:13:09.962 request: 00:13:09.962 { 00:13:09.963 "method": "nvmf_subsystem_remove_listener", 00:13:09.963 "params": { 00:13:09.963 "nqn": "nqn.2016-06.io.spdk:cnode", 00:13:09.963 "listen_address": { 00:13:09.963 "trtype": "tcp", 00:13:09.963 "traddr": "", 00:13:09.963 "trsvcid": "4421" 00:13:09.963 } 00:13:09.963 } 00:13:09.963 } 00:13:09.963 Got JSON-RPC error response 00:13:09.963 GoRPCClient: error on JSON-RPC call' 00:13:09.963 11:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@70 -- # [[ 2024/11/20 11:25:55 error on JSON-RPC call, method: nvmf_subsystem_remove_listener, params: map[listen_address:map[traddr: trsvcid:4421 trtype:tcp] nqn:nqn.2016-06.io.spdk:cnode], err: error received for nvmf_subsystem_remove_listener method, err: Code=-32602 Msg=Invalid parameters 00:13:09.963 request: 00:13:09.963 { 00:13:09.963 "method": "nvmf_subsystem_remove_listener", 00:13:09.963 "params": { 00:13:09.963 "nqn": "nqn.2016-06.io.spdk:cnode", 
00:13:09.963 "listen_address": { 00:13:09.963 "trtype": "tcp", 00:13:09.963 "traddr": "", 00:13:09.963 "trsvcid": "4421" 00:13:09.963 } 00:13:09.963 } 00:13:09.963 } 00:13:09.963 Got JSON-RPC error response 00:13:09.963 GoRPCClient: error on JSON-RPC call != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:13:09.963 11:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode20120 -i 0 00:13:10.221 [2024-11-20 11:25:55.800598] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode20120: invalid cntlid range [0-65519] 00:13:10.478 11:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # out='2024/11/20 11:25:55 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[min_cntlid:0 nqn:nqn.2016-06.io.spdk:cnode20120], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [0-65519] 00:13:10.478 request: 00:13:10.478 { 00:13:10.478 "method": "nvmf_create_subsystem", 00:13:10.478 "params": { 00:13:10.478 "nqn": "nqn.2016-06.io.spdk:cnode20120", 00:13:10.478 "min_cntlid": 0 00:13:10.478 } 00:13:10.478 } 00:13:10.478 Got JSON-RPC error response 00:13:10.478 GoRPCClient: error on JSON-RPC call' 00:13:10.478 11:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@74 -- # [[ 2024/11/20 11:25:55 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[min_cntlid:0 nqn:nqn.2016-06.io.spdk:cnode20120], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [0-65519] 00:13:10.478 request: 00:13:10.478 { 00:13:10.478 "method": "nvmf_create_subsystem", 00:13:10.479 "params": { 00:13:10.479 "nqn": "nqn.2016-06.io.spdk:cnode20120", 00:13:10.479 "min_cntlid": 0 00:13:10.479 } 00:13:10.479 } 00:13:10.479 Got JSON-RPC error response 00:13:10.479 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:10.479 11:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode13095 -i 65520 00:13:10.737 [2024-11-20 11:25:56.076845] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode13095: invalid cntlid range [65520-65519] 00:13:10.737 11:25:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # out='2024/11/20 11:25:56 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[min_cntlid:65520 nqn:nqn.2016-06.io.spdk:cnode13095], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [65520-65519] 00:13:10.737 request: 00:13:10.737 { 00:13:10.737 "method": "nvmf_create_subsystem", 00:13:10.737 "params": { 00:13:10.737 "nqn": "nqn.2016-06.io.spdk:cnode13095", 00:13:10.737 "min_cntlid": 65520 00:13:10.737 } 00:13:10.737 } 00:13:10.737 Got JSON-RPC error response 00:13:10.737 GoRPCClient: error on JSON-RPC call' 00:13:10.737 11:25:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@76 -- # [[ 2024/11/20 11:25:56 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[min_cntlid:65520 nqn:nqn.2016-06.io.spdk:cnode13095], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [65520-65519] 00:13:10.737 request: 00:13:10.737 { 00:13:10.737 "method": "nvmf_create_subsystem", 00:13:10.737 "params": { 00:13:10.737 "nqn": 
"nqn.2016-06.io.spdk:cnode13095", 00:13:10.737 "min_cntlid": 65520 00:13:10.737 } 00:13:10.737 } 00:13:10.737 Got JSON-RPC error response 00:13:10.737 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:10.737 11:25:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode24368 -I 0 00:13:10.996 [2024-11-20 11:25:56.413728] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode24368: invalid cntlid range [1-0] 00:13:10.996 11:25:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # out='2024/11/20 11:25:56 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:0 nqn:nqn.2016-06.io.spdk:cnode24368], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [1-0] 00:13:10.996 request: 00:13:10.996 { 00:13:10.996 "method": "nvmf_create_subsystem", 00:13:10.996 "params": { 00:13:10.996 "nqn": "nqn.2016-06.io.spdk:cnode24368", 00:13:10.996 "max_cntlid": 0 00:13:10.996 } 00:13:10.996 } 00:13:10.996 Got JSON-RPC error response 00:13:10.996 GoRPCClient: error on JSON-RPC call' 00:13:10.996 11:25:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@78 -- # [[ 2024/11/20 11:25:56 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:0 nqn:nqn.2016-06.io.spdk:cnode24368], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [1-0] 00:13:10.996 request: 00:13:10.996 { 00:13:10.996 "method": "nvmf_create_subsystem", 00:13:10.996 "params": { 00:13:10.996 "nqn": "nqn.2016-06.io.spdk:cnode24368", 00:13:10.996 "max_cntlid": 0 00:13:10.996 } 00:13:10.996 } 00:13:10.996 Got JSON-RPC error response 00:13:10.996 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:10.996 11:25:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode24411 -I 65520 00:13:11.254 [2024-11-20 11:25:56.762069] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode24411: invalid cntlid range [1-65520] 00:13:11.255 11:25:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # out='2024/11/20 11:25:56 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:65520 nqn:nqn.2016-06.io.spdk:cnode24411], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [1-65520] 00:13:11.255 request: 00:13:11.255 { 00:13:11.255 "method": "nvmf_create_subsystem", 00:13:11.255 "params": { 00:13:11.255 "nqn": "nqn.2016-06.io.spdk:cnode24411", 00:13:11.255 "max_cntlid": 65520 00:13:11.255 } 00:13:11.255 } 00:13:11.255 Got JSON-RPC error response 00:13:11.255 GoRPCClient: error on JSON-RPC call' 00:13:11.255 11:25:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@80 -- # [[ 2024/11/20 11:25:56 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:65520 nqn:nqn.2016-06.io.spdk:cnode24411], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [1-65520] 00:13:11.255 request: 00:13:11.255 { 00:13:11.255 "method": "nvmf_create_subsystem", 00:13:11.255 "params": { 00:13:11.255 "nqn": "nqn.2016-06.io.spdk:cnode24411", 00:13:11.255 "max_cntlid": 65520 00:13:11.255 } 00:13:11.255 } 00:13:11.255 Got JSON-RPC 
error response 00:13:11.255 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:11.255 11:25:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8758 -i 6 -I 5 00:13:11.514 [2024-11-20 11:25:57.046309] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode8758: invalid cntlid range [6-5] 00:13:11.514 11:25:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # out='2024/11/20 11:25:57 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:5 min_cntlid:6 nqn:nqn.2016-06.io.spdk:cnode8758], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [6-5] 00:13:11.514 request: 00:13:11.514 { 00:13:11.514 "method": "nvmf_create_subsystem", 00:13:11.514 "params": { 00:13:11.514 "nqn": "nqn.2016-06.io.spdk:cnode8758", 00:13:11.514 "min_cntlid": 6, 00:13:11.514 "max_cntlid": 5 00:13:11.514 } 00:13:11.514 } 00:13:11.514 Got JSON-RPC error response 00:13:11.514 GoRPCClient: error on JSON-RPC call' 00:13:11.514 11:25:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@84 -- # [[ 2024/11/20 11:25:57 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:5 min_cntlid:6 nqn:nqn.2016-06.io.spdk:cnode8758], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [6-5] 00:13:11.514 request: 00:13:11.514 { 00:13:11.514 "method": "nvmf_create_subsystem", 00:13:11.514 "params": { 00:13:11.514 "nqn": "nqn.2016-06.io.spdk:cnode8758", 00:13:11.514 "min_cntlid": 6, 00:13:11.514 "max_cntlid": 5 00:13:11.514 } 00:13:11.514 } 00:13:11.514 Got JSON-RPC error response 00:13:11.514 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:11.514 11:25:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:13:11.772 11:25:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:13:11.772 { 00:13:11.772 "name": "foobar", 00:13:11.772 "method": "nvmf_delete_target", 00:13:11.772 "req_id": 1 00:13:11.772 } 00:13:11.772 Got JSON-RPC error response 00:13:11.772 response: 00:13:11.772 { 00:13:11.772 "code": -32602, 00:13:11.772 "message": "The specified target doesn'\''t exist, cannot delete it." 00:13:11.772 }' 00:13:11.772 11:25:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:13:11.772 { 00:13:11.772 "name": "foobar", 00:13:11.772 "method": "nvmf_delete_target", 00:13:11.772 "req_id": 1 00:13:11.772 } 00:13:11.772 Got JSON-RPC error response 00:13:11.772 response: 00:13:11.772 { 00:13:11.772 "code": -32602, 00:13:11.772 "message": "The specified target doesn't exist, cannot delete it." 
00:13:11.772 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:13:11.772 11:25:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:13:11.772 11:25:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:13:11.772 11:25:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:11.772 11:25:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@121 -- # sync 00:13:11.772 11:25:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:11.772 11:25:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@124 -- # set +e 00:13:11.772 11:25:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:11.772 11:25:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:11.772 rmmod nvme_tcp 00:13:11.772 rmmod nvme_fabrics 00:13:11.772 rmmod nvme_keyring 00:13:11.772 11:25:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:11.772 11:25:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@128 -- # set -e 00:13:11.772 11:25:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@129 -- # return 0 00:13:11.772 11:25:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@517 -- # '[' -n 74046 ']' 00:13:11.772 11:25:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@518 -- # killprocess 74046 00:13:11.772 11:25:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@954 -- # '[' -z 74046 ']' 00:13:11.772 11:25:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@958 -- # kill -0 74046 00:13:11.772 11:25:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@959 -- # uname 00:13:11.772 11:25:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:11.772 11:25:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74046 00:13:11.772 11:25:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:11.772 killing process with pid 74046 00:13:11.772 11:25:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:11.772 11:25:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74046' 00:13:11.772 11:25:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@973 -- # kill 74046 00:13:11.772 11:25:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@978 -- # wait 74046 00:13:12.031 11:25:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:12.031 11:25:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:12.031 11:25:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:12.031 11:25:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # iptr 00:13:12.031 11:25:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # iptables-save 00:13:12.031 11:25:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # iptables-restore 00:13:12.031 11:25:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # grep -v 
SPDK_NVMF 00:13:12.031 11:25:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:12.031 11:25:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:13:12.031 11:25:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:13:12.031 11:25:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:13:12.031 11:25:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:13:12.031 11:25:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:13:12.031 11:25:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:13:12.031 11:25:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:13:12.031 11:25:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:13:12.031 11:25:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:13:12.031 11:25:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:13:12.031 11:25:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:13:12.289 11:25:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:13:12.289 11:25:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:12.289 11:25:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:12.289 11:25:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@246 -- # remove_spdk_ns 00:13:12.289 11:25:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:12.289 11:25:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:12.289 11:25:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:12.289 11:25:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@300 -- # return 0 00:13:12.289 00:13:12.289 real 0m6.027s 00:13:12.289 user 0m23.559s 00:13:12.289 sys 0m1.255s 00:13:12.289 11:25:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:12.289 11:25:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:12.289 ************************************ 00:13:12.289 END TEST nvmf_invalid 00:13:12.289 ************************************ 00:13:12.289 11:25:57 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:13:12.289 11:25:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:12.289 11:25:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:12.290 11:25:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:12.290 ************************************ 00:13:12.290 START TEST nvmf_connect_stress 00:13:12.290 ************************************ 
00:13:12.290 11:25:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:13:12.290 * Looking for test storage... 00:13:12.290 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:12.290 11:25:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:12.290 11:25:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # lcov --version 00:13:12.290 11:25:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:12.548 11:25:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:12.548 11:25:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:12.548 11:25:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:12.548 11:25:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:12.548 11:25:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # IFS=.-: 00:13:12.548 11:25:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # read -ra ver1 00:13:12.548 11:25:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # IFS=.-: 00:13:12.548 11:25:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # read -ra ver2 00:13:12.548 11:25:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@338 -- # local 'op=<' 00:13:12.548 11:25:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@340 -- # ver1_l=2 00:13:12.548 11:25:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@341 -- # ver2_l=1 00:13:12.548 11:25:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:12.548 11:25:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@344 -- # case "$op" in 00:13:12.548 11:25:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@345 -- # : 1 00:13:12.548 11:25:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:12.548 11:25:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:12.548 11:25:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # decimal 1 00:13:12.548 11:25:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=1 00:13:12.548 11:25:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:12.548 11:25:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 1 00:13:12.548 11:25:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:13:12.548 11:25:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # decimal 2 00:13:12.548 11:25:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=2 00:13:12.548 11:25:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:12.548 11:25:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 2 00:13:12.548 11:25:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:13:12.548 11:25:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:12.548 11:25:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:12.548 11:25:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # return 0 00:13:12.549 11:25:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:12.549 11:25:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:12.549 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:12.549 --rc genhtml_branch_coverage=1 00:13:12.549 --rc genhtml_function_coverage=1 00:13:12.549 --rc genhtml_legend=1 00:13:12.549 --rc geninfo_all_blocks=1 00:13:12.549 --rc geninfo_unexecuted_blocks=1 00:13:12.549 00:13:12.549 ' 00:13:12.549 11:25:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:12.549 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:12.549 --rc genhtml_branch_coverage=1 00:13:12.549 --rc genhtml_function_coverage=1 00:13:12.549 --rc genhtml_legend=1 00:13:12.549 --rc geninfo_all_blocks=1 00:13:12.549 --rc geninfo_unexecuted_blocks=1 00:13:12.549 00:13:12.549 ' 00:13:12.549 11:25:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:12.549 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:12.549 --rc genhtml_branch_coverage=1 00:13:12.549 --rc genhtml_function_coverage=1 00:13:12.549 --rc genhtml_legend=1 00:13:12.549 --rc geninfo_all_blocks=1 00:13:12.549 --rc geninfo_unexecuted_blocks=1 00:13:12.549 00:13:12.549 ' 00:13:12.549 11:25:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:12.549 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:12.549 --rc genhtml_branch_coverage=1 00:13:12.549 --rc genhtml_function_coverage=1 00:13:12.549 --rc genhtml_legend=1 00:13:12.549 --rc geninfo_all_blocks=1 00:13:12.549 --rc geninfo_unexecuted_blocks=1 00:13:12.549 00:13:12.549 ' 00:13:12.549 11:25:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 
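For reference, the lcov probe traced above boils down to a component-wise dotted-version comparison (scripts/common.sh cmp_versions). A minimal sketch of that logic, assuming a helper name ver_lt that is not in the scripts themselves and ignoring non-numeric components, which the real helper guards against:

  ver_lt() {                                  # returns 0 if "$1" is older than "$2", e.g. ver_lt 1.15 2
      local -a ver1 ver2; local v a b
      IFS=.- read -ra ver1 <<< "$1"
      IFS=.- read -ra ver2 <<< "$2"
      for ((v = 0; v < ${#ver1[@]} || v < ${#ver2[@]}; v++)); do
          a=${ver1[v]:-0} b=${ver2[v]:-0}     # missing components compare as 0
          ((a > b)) && return 1               # first differing component decides
          ((a < b)) && return 0
      done
      return 1                                # equal versions are not "less than"
  }
  # usage, mirroring the trace: pick the extra coverage flags only for old lcov
  ver_lt "$(lcov --version | awk '{print $NF}')" 2 && lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'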
00:13:12.549 11:25:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:13:12.549 11:25:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:12.549 11:25:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:12.549 11:25:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:12.549 11:25:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:12.549 11:25:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:12.549 11:25:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:12.549 11:25:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:12.549 11:25:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:12.549 11:25:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:12.549 11:25:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:12.549 11:25:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a 00:13:12.549 11:25:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=806d442c-08ba-4719-b76d-33169283459a 00:13:12.549 11:25:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:12.549 11:25:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:12.549 11:25:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:12.549 11:25:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:12.549 11:25:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:12.549 11:25:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:13:12.549 11:25:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:12.549 11:25:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:12.549 11:25:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:12.549 11:25:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:12.549 11:25:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:12.549 11:25:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:12.549 11:25:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:13:12.549 11:25:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:12.549 11:25:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # : 0 00:13:12.549 11:25:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:12.549 11:25:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:12.549 11:25:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:12.549 11:25:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:12.549 11:25:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:12.549 11:25:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:12.549 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:12.549 11:25:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:12.549 11:25:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:12.549 11:25:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:12.549 11:25:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:13:12.549 11:25:57 
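The "[: : integer expression expected" message above is benign: build_nvmf_app_args tests a flag that is unset in this run, so test sees an empty string where it expects a number and the guarded branch is simply skipped. The actual variable name is elided ('' in the trace), so the sketch below uses a placeholder; a defaulted expansion avoids the noise:

  # SOME_NIC_FLAG is a hypothetical stand-in for the unset variable in nvmf/common.sh line 33
  if [ "${SOME_NIC_FLAG:-0}" -eq 1 ]; then
      NVMF_APP+=(--extra-arg)                 # placeholder for whatever the guarded branch would add
  fi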
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:12.549 11:25:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:12.549 11:25:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:12.549 11:25:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:12.549 11:25:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:12.549 11:25:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:12.549 11:25:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:12.549 11:25:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:12.549 11:25:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:13:12.549 11:25:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:13:12.549 11:25:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:13:12.549 11:25:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:13:12.549 11:25:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:13:12.549 11:25:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@460 -- # nvmf_veth_init 00:13:12.549 11:25:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:12.549 11:25:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:13:12.549 11:25:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:13:12.549 11:25:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:13:12.549 11:25:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:12.549 11:25:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:13:12.549 11:25:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:12.549 11:25:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:13:12.549 11:25:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:12.549 11:25:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:13:12.549 11:25:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:12.549 11:25:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:12.549 11:25:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:12.549 11:25:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:12.549 11:25:57 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:12.549 11:25:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:12.549 11:25:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:13:12.549 Cannot find device "nvmf_init_br" 00:13:12.549 11:25:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@162 -- # true 00:13:12.549 11:25:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:13:12.549 Cannot find device "nvmf_init_br2" 00:13:12.549 11:25:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@163 -- # true 00:13:12.549 11:25:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:13:12.549 Cannot find device "nvmf_tgt_br" 00:13:12.549 11:25:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@164 -- # true 00:13:12.549 11:25:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:13:12.549 Cannot find device "nvmf_tgt_br2" 00:13:12.549 11:25:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@165 -- # true 00:13:12.549 11:25:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:13:12.549 Cannot find device "nvmf_init_br" 00:13:12.549 11:25:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@166 -- # true 00:13:12.549 11:25:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:13:12.549 Cannot find device "nvmf_init_br2" 00:13:12.549 11:25:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@167 -- # true 00:13:12.549 11:25:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:13:12.549 Cannot find device "nvmf_tgt_br" 00:13:12.549 11:25:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@168 -- # true 00:13:12.549 11:25:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:13:12.549 Cannot find device "nvmf_tgt_br2" 00:13:12.549 11:25:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@169 -- # true 00:13:12.549 11:25:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:13:12.549 Cannot find device "nvmf_br" 00:13:12.549 11:25:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@170 -- # true 00:13:12.549 11:25:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:13:12.549 Cannot find device "nvmf_init_if" 00:13:12.549 11:25:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@171 -- # true 00:13:12.549 11:25:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:13:12.549 Cannot find device "nvmf_init_if2" 00:13:12.549 11:25:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@172 -- # true 00:13:12.549 11:25:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:12.549 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:12.549 11:25:58 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@173 -- # true 00:13:12.550 11:25:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:12.550 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:12.550 11:25:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@174 -- # true 00:13:12.550 11:25:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:13:12.550 11:25:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:12.550 11:25:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:13:12.550 11:25:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:12.550 11:25:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:12.550 11:25:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:12.550 11:25:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:12.807 11:25:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:12.807 11:25:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:13:12.807 11:25:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:13:12.807 11:25:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:13:12.807 11:25:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:13:12.807 11:25:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:13:12.807 11:25:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:13:12.807 11:25:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:13:12.807 11:25:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:13:12.807 11:25:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:13:12.807 11:25:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:12.807 11:25:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:12.807 11:25:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:12.807 11:25:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:13:12.807 11:25:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:13:12.807 11:25:58 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:13:12.807 11:25:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:13:12.807 11:25:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:12.807 11:25:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:12.807 11:25:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:12.807 11:25:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:13:12.808 11:25:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:13:12.808 11:25:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:13:12.808 11:25:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:12.808 11:25:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:13:12.808 11:25:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:13:12.808 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:12.808 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.074 ms 00:13:12.808 00:13:12.808 --- 10.0.0.3 ping statistics --- 00:13:12.808 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:12.808 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:13:12.808 11:25:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:13:12.808 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:13:12.808 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.046 ms 00:13:12.808 00:13:12.808 --- 10.0.0.4 ping statistics --- 00:13:12.808 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:12.808 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:13:12.808 11:25:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:12.808 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:12.808 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:13:12.808 00:13:12.808 --- 10.0.0.1 ping statistics --- 00:13:12.808 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:12.808 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:13:12.808 11:25:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:13:12.808 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:12.808 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.043 ms 00:13:12.808 00:13:12.808 --- 10.0.0.2 ping statistics --- 00:13:12.808 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:12.808 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:13:12.808 11:25:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:12.808 11:25:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@461 -- # return 0 00:13:12.808 11:25:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:12.808 11:25:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:12.808 11:25:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:12.808 11:25:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:12.808 11:25:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:12.808 11:25:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:12.808 11:25:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:12.808 11:25:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:13:12.808 11:25:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:12.808 11:25:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:12.808 11:25:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:12.808 11:25:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@509 -- # nvmfpid=74593 00:13:12.808 11:25:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:13:12.808 11:25:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@510 -- # waitforlisten 74593 00:13:12.808 11:25:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # '[' -z 74593 ']' 00:13:12.808 11:25:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:12.808 11:25:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:12.808 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:12.808 11:25:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:12.808 11:25:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:12.808 11:25:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:12.808 [2024-11-20 11:25:58.382240] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 
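A condensed, hand-runnable equivalent of the topology nvmf_veth_init builds and verifies above, reduced to a single initiator/target pair; the namespace, interface, bridge names and addresses are exactly the ones traced, but the condensation itself is a sketch, not the harness script:

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  # the harness appends "-m comment --comment SPDK_NVMF:..." so iptr can strip these rules at teardown
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.3                          # initiator side reaches the target namespace over the bridge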
00:13:12.808 [2024-11-20 11:25:58.382327] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:13.065 [2024-11-20 11:25:58.568368] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:13.065 [2024-11-20 11:25:58.615962] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:13.065 [2024-11-20 11:25:58.616029] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:13.065 [2024-11-20 11:25:58.616046] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:13.065 [2024-11-20 11:25:58.616058] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:13.065 [2024-11-20 11:25:58.616069] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:13.065 [2024-11-20 11:25:58.617292] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:13.065 [2024-11-20 11:25:58.617421] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:13.065 [2024-11-20 11:25:58.617428] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:13.999 11:25:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:13.999 11:25:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@868 -- # return 0 00:13:13.999 11:25:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:13.999 11:25:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:13.999 11:25:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:13.999 11:25:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:13.999 11:25:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:13.999 11:25:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.999 11:25:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:13.999 [2024-11-20 11:25:59.432113] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:13.999 11:25:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.999 11:25:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:13.999 11:25:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.999 11:25:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:13.999 11:25:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.999 11:25:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:13:13.999 11:25:59 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.999 11:25:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:13.999 [2024-11-20 11:25:59.452279] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:13:13.999 11:25:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.999 11:25:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:13:13.999 11:25:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.000 11:25:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:14.000 NULL1 00:13:14.000 11:25:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.000 11:25:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=74645 00:13:14.000 11:25:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /home/vagrant/spdk_repo/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:13:14.000 11:25:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt 00:13:14.000 11:25:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt 00:13:14.000 11:25:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:13:14.000 11:25:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:14.000 11:25:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:14.000 11:25:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:14.000 11:25:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:14.000 11:25:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:14.000 11:25:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:14.000 11:25:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:14.000 11:25:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:14.000 11:25:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:14.000 11:25:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:14.000 11:25:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:14.000 11:25:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:14.000 11:25:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:14.000 11:25:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:14.000 11:25:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
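The target-side configuration performed above, written out as direct rpc.py calls (rpc_cmd is the harness wrapper around scripts/rpc.py; its exact plumbing is not shown in this excerpt), followed by the stress client exactly as launched. The target itself was started earlier as pid 74593 with: ip netns exec nvmf_tgt_ns_spdk $spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE.

  spdk=/home/vagrant/spdk_repo/spdk
  $spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  $spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  $spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
  $spdk/scripts/rpc.py bdev_null_create NULL1 1000 512      # 1000 MiB null bdev, 512 B blocks
  # stress client: one core, 10 s against the listener created above (pid becomes PERF_PID=74645)
  $spdk/test/nvme/connect_stress/connect_stress -c 0x1 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 &
  PERF_PID=$!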
target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:14.000 11:25:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:14.000 11:25:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:14.000 11:25:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:14.000 11:25:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:14.000 11:25:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:14.000 11:25:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:14.000 11:25:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:14.000 11:25:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:14.000 11:25:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:14.000 11:25:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:14.000 11:25:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:14.000 11:25:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:14.000 11:25:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:14.000 11:25:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:14.000 11:25:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:14.000 11:25:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:14.000 11:25:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:14.000 11:25:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:14.000 11:25:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:14.000 11:25:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:14.000 11:25:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:14.000 11:25:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:14.000 11:25:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:14.000 11:25:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:14.000 11:25:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:14.000 11:25:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 74645 00:13:14.000 11:25:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:14.000 11:25:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.000 11:25:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:14.566 11:25:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- 
# [[ 0 == 0 ]] 00:13:14.566 11:25:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 74645 00:13:14.566 11:25:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:14.566 11:25:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.566 11:25:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:14.825 11:26:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.825 11:26:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 74645 00:13:14.825 11:26:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:14.825 11:26:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.825 11:26:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:15.084 11:26:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.084 11:26:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 74645 00:13:15.084 11:26:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:15.084 11:26:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.084 11:26:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:15.343 11:26:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.343 11:26:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 74645 00:13:15.343 11:26:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:15.343 11:26:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.343 11:26:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:15.603 11:26:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.603 11:26:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 74645 00:13:15.603 11:26:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:15.603 11:26:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.603 11:26:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:16.222 11:26:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.222 11:26:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 74645 00:13:16.222 11:26:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:16.222 11:26:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.222 11:26:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:16.222 11:26:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.222 
11:26:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 74645 00:13:16.222 11:26:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:16.222 11:26:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.222 11:26:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:16.789 11:26:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.789 11:26:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 74645 00:13:16.789 11:26:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:16.789 11:26:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.789 11:26:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:17.047 11:26:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.047 11:26:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 74645 00:13:17.047 11:26:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:17.047 11:26:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.047 11:26:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:17.306 11:26:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.306 11:26:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 74645 00:13:17.306 11:26:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:17.306 11:26:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.306 11:26:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:17.564 11:26:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.564 11:26:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 74645 00:13:17.564 11:26:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:17.564 11:26:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.564 11:26:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:17.822 11:26:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.822 11:26:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 74645 00:13:17.822 11:26:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:17.822 11:26:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.822 11:26:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:18.389 11:26:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.389 11:26:03 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 74645 00:13:18.389 11:26:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:18.389 11:26:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.389 11:26:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:18.648 11:26:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.648 11:26:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 74645 00:13:18.648 11:26:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:18.648 11:26:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.648 11:26:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:18.907 11:26:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.907 11:26:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 74645 00:13:18.907 11:26:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:18.907 11:26:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.907 11:26:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:19.165 11:26:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.165 11:26:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 74645 00:13:19.165 11:26:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:19.165 11:26:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.165 11:26:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:19.423 11:26:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.424 11:26:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 74645 00:13:19.424 11:26:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:19.424 11:26:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.424 11:26:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:19.990 11:26:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.990 11:26:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 74645 00:13:19.990 11:26:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:19.990 11:26:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.990 11:26:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:20.248 11:26:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.248 11:26:05 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 74645 00:13:20.248 11:26:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:20.248 11:26:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.248 11:26:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:20.505 11:26:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.505 11:26:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 74645 00:13:20.505 11:26:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:20.505 11:26:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.505 11:26:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:20.763 11:26:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.763 11:26:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 74645 00:13:20.763 11:26:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:20.763 11:26:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.763 11:26:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:21.021 11:26:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.021 11:26:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 74645 00:13:21.021 11:26:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:21.021 11:26:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.021 11:26:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:21.624 11:26:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.624 11:26:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 74645 00:13:21.624 11:26:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:21.624 11:26:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.624 11:26:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:21.900 11:26:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.900 11:26:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 74645 00:13:21.900 11:26:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:21.900 11:26:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.900 11:26:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:22.159 11:26:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.159 11:26:07 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 74645 00:13:22.159 11:26:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:22.159 11:26:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.159 11:26:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:22.418 11:26:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.418 11:26:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 74645 00:13:22.418 11:26:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:22.418 11:26:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.418 11:26:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:22.675 11:26:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.675 11:26:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 74645 00:13:22.675 11:26:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:22.675 11:26:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.675 11:26:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:22.932 11:26:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.932 11:26:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 74645 00:13:22.932 11:26:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:22.932 11:26:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.932 11:26:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:23.496 11:26:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.496 11:26:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 74645 00:13:23.496 11:26:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:23.496 11:26:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.496 11:26:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:23.754 11:26:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.754 11:26:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 74645 00:13:23.754 11:26:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:23.754 11:26:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.754 11:26:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:24.012 11:26:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.012 11:26:09 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 74645 00:13:24.012 11:26:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:24.012 11:26:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.012 11:26:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:24.269 Testing NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:13:24.269 11:26:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.269 11:26:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 74645 00:13:24.269 /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (74645) - No such process 00:13:24.269 11:26:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 74645 00:13:24.269 11:26:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt 00:13:24.269 11:26:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:13:24.269 11:26:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:13:24.269 11:26:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:24.269 11:26:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # sync 00:13:24.269 11:26:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:24.269 11:26:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set +e 00:13:24.269 11:26:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:24.269 11:26:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:24.269 rmmod nvme_tcp 00:13:24.269 rmmod nvme_fabrics 00:13:24.527 rmmod nvme_keyring 00:13:24.527 11:26:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:24.527 11:26:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@128 -- # set -e 00:13:24.527 11:26:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@129 -- # return 0 00:13:24.527 11:26:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@517 -- # '[' -n 74593 ']' 00:13:24.527 11:26:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@518 -- # killprocess 74593 00:13:24.527 11:26:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # '[' -z 74593 ']' 00:13:24.527 11:26:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # kill -0 74593 00:13:24.527 11:26:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # uname 00:13:24.527 11:26:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:24.527 11:26:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74593 00:13:24.527 killing process with pid 74593 00:13:24.527 11:26:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:13:24.527 
11:26:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:13:24.527 11:26:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74593' 00:13:24.527 11:26:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@973 -- # kill 74593 00:13:24.527 11:26:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@978 -- # wait 74593 00:13:24.527 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:24.527 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:24.527 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:24.527 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # iptr 00:13:24.527 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:24.527 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-save 00:13:24.527 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-restore 00:13:24.527 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:24.527 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:13:24.527 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:13:24.528 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:13:24.528 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:13:24.528 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:13:24.786 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:13:24.786 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:13:24.786 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:13:24.786 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:13:24.786 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:13:24.786 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:13:24.786 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:13:24.786 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:24.786 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:24.786 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@246 -- # remove_spdk_ns 00:13:24.786 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:24.786 11:26:10 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:24.786 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:24.786 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@300 -- # return 0 00:13:24.786 ************************************ 00:13:24.786 END TEST nvmf_connect_stress 00:13:24.786 ************************************ 00:13:24.786 00:13:24.786 real 0m12.566s 00:13:24.786 user 0m40.845s 00:13:24.786 sys 0m3.429s 00:13:24.786 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:24.786 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:24.786 11:26:10 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /home/vagrant/spdk_repo/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:13:24.786 11:26:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:24.786 11:26:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:24.786 11:26:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:24.786 ************************************ 00:13:24.786 START TEST nvmf_fused_ordering 00:13:24.786 ************************************ 00:13:24.786 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:13:25.045 * Looking for test storage... 00:13:25.045 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:25.045 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:25.045 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # lcov --version 00:13:25.045 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:25.045 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:25.045 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:25.045 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:25.045 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:25.045 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-: 00:13:25.045 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1 00:13:25.045 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-: 00:13:25.045 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2 00:13:25.045 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@338 -- # local 'op=<' 00:13:25.045 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2 00:13:25.045 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@341 -- # ver2_l=1 00:13:25.045 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:25.045 11:26:10 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@344 -- # case "$op" in 00:13:25.045 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1 00:13:25.045 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:25.045 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:25.045 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1 00:13:25.045 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1 00:13:25.045 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:25.045 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1 00:13:25.045 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1 00:13:25.045 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2 00:13:25.045 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2 00:13:25.045 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:25.045 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2 00:13:25.045 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # ver2[v]=2 00:13:25.045 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:25.045 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:25.045 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0 00:13:25.045 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:25.045 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:25.045 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:25.045 --rc genhtml_branch_coverage=1 00:13:25.045 --rc genhtml_function_coverage=1 00:13:25.045 --rc genhtml_legend=1 00:13:25.045 --rc geninfo_all_blocks=1 00:13:25.045 --rc geninfo_unexecuted_blocks=1 00:13:25.045 00:13:25.045 ' 00:13:25.045 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:25.045 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:25.045 --rc genhtml_branch_coverage=1 00:13:25.045 --rc genhtml_function_coverage=1 00:13:25.045 --rc genhtml_legend=1 00:13:25.045 --rc geninfo_all_blocks=1 00:13:25.045 --rc geninfo_unexecuted_blocks=1 00:13:25.045 00:13:25.045 ' 00:13:25.045 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:25.045 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:25.045 --rc genhtml_branch_coverage=1 00:13:25.045 --rc genhtml_function_coverage=1 00:13:25.045 --rc genhtml_legend=1 00:13:25.045 --rc geninfo_all_blocks=1 00:13:25.045 --rc geninfo_unexecuted_blocks=1 00:13:25.045 00:13:25.045 ' 00:13:25.045 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:25.046 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:13:25.046 --rc genhtml_branch_coverage=1 00:13:25.046 --rc genhtml_function_coverage=1 00:13:25.046 --rc genhtml_legend=1 00:13:25.046 --rc geninfo_all_blocks=1 00:13:25.046 --rc geninfo_unexecuted_blocks=1 00:13:25.046 00:13:25.046 ' 00:13:25.046 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:25.046 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:13:25.046 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:25.046 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:25.046 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:25.046 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:25.046 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:25.046 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:25.046 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:25.046 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:25.046 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:25.046 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:25.046 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a 00:13:25.046 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=806d442c-08ba-4719-b76d-33169283459a 00:13:25.046 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:25.046 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:25.046 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:25.046 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:25.046 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:25.046 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob 00:13:25.046 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:25.046 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:25.046 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:25.046 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:25.046 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:25.046 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:25.046 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:13:25.046 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:25.046 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # : 0 00:13:25.046 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:25.046 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:25.046 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:25.046 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:25.046 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:25.046 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:13:25.046 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:25.046 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:25.046 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:25.046 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:25.046 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:13:25.046 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:25.046 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:25.046 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:25.046 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:25.046 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:25.046 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:25.046 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:25.046 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:25.046 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:13:25.046 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:13:25.046 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:13:25.046 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:13:25.046 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:13:25.046 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@460 -- # nvmf_veth_init 00:13:25.046 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:25.046 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:13:25.046 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:13:25.046 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:13:25.046 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:25.046 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:13:25.046 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:25.046 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:13:25.046 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:25.046 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:13:25.046 11:26:10 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:25.046 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:25.046 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:25.046 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:25.046 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:25.046 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:25.046 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:13:25.046 Cannot find device "nvmf_init_br" 00:13:25.046 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@162 -- # true 00:13:25.046 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:13:25.046 Cannot find device "nvmf_init_br2" 00:13:25.046 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@163 -- # true 00:13:25.046 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:13:25.046 Cannot find device "nvmf_tgt_br" 00:13:25.046 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@164 -- # true 00:13:25.046 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:13:25.046 Cannot find device "nvmf_tgt_br2" 00:13:25.046 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@165 -- # true 00:13:25.046 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:13:25.046 Cannot find device "nvmf_init_br" 00:13:25.046 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@166 -- # true 00:13:25.046 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:13:25.046 Cannot find device "nvmf_init_br2" 00:13:25.046 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@167 -- # true 00:13:25.046 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:13:25.305 Cannot find device "nvmf_tgt_br" 00:13:25.305 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@168 -- # true 00:13:25.305 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:13:25.305 Cannot find device "nvmf_tgt_br2" 00:13:25.305 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@169 -- # true 00:13:25.305 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:13:25.305 Cannot find device "nvmf_br" 00:13:25.305 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@170 -- # true 00:13:25.305 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:13:25.305 Cannot find device "nvmf_init_if" 00:13:25.305 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@171 -- # true 00:13:25.305 
11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:13:25.305 Cannot find device "nvmf_init_if2" 00:13:25.305 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@172 -- # true 00:13:25.305 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:25.305 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:25.305 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@173 -- # true 00:13:25.305 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:25.305 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:25.305 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@174 -- # true 00:13:25.305 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:13:25.305 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:25.305 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:13:25.305 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:25.306 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:25.306 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:25.306 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:25.306 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:25.306 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:13:25.306 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:13:25.306 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:13:25.306 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:13:25.306 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:13:25.306 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:13:25.306 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:13:25.306 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:13:25.306 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:13:25.306 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:25.306 11:26:10 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:25.306 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:25.306 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:13:25.306 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:13:25.306 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:13:25.306 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:13:25.306 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:25.306 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:25.564 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:25.564 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:13:25.564 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:13:25.564 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:13:25.565 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:25.565 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:13:25.565 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:13:25.565 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:25.565 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.083 ms 00:13:25.565 00:13:25.565 --- 10.0.0.3 ping statistics --- 00:13:25.565 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:25.565 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 00:13:25.565 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:13:25.565 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:13:25.565 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.073 ms 00:13:25.565 00:13:25.565 --- 10.0.0.4 ping statistics --- 00:13:25.565 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:25.565 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:13:25.565 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:25.565 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:25.565 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.035 ms 00:13:25.565 00:13:25.565 --- 10.0.0.1 ping statistics --- 00:13:25.565 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:25.565 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:13:25.565 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:13:25.565 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:25.565 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.078 ms 00:13:25.565 00:13:25.565 --- 10.0.0.2 ping statistics --- 00:13:25.565 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:25.565 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:13:25.565 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:25.565 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@461 -- # return 0 00:13:25.565 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:25.565 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:25.565 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:25.565 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:25.565 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:25.565 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:25.565 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:25.565 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:13:25.565 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:25.565 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:25.565 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:25.565 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@509 -- # nvmfpid=75022 00:13:25.565 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:13:25.565 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@510 -- # waitforlisten 75022 00:13:25.565 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # '[' -z 75022 ']' 00:13:25.565 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:25.565 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:25.565 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:25.565 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
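The four ping checks above verify the virtual test network that nvmf_veth_init assembled before the target application was launched. Condensed into plain ip/iptables commands, the first initiator/target pair boils down to the sequence below (a sketch reconstructed from the trace in this log, not a verbatim copy of nvmf/common.sh; the interface names and 10.0.0.0/24 addresses are the ones shown above):

# target side lives in its own network namespace
ip netns add nvmf_tgt_ns_spdk
# veth pairs: the *_if ends carry traffic, the *_br ends get enslaved to a bridge
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
# addressing: initiator 10.0.0.1, target 10.0.0.3, same /24
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
# a bridge in the root namespace stitches the peer ends together
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
# open the NVMe/TCP port towards the initiator interface and allow bridge forwarding
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
# sanity check: the namespaced target address must answer from the root namespace
ping -c 1 10.0.0.3

The second pair (nvmf_init_if2 at 10.0.0.2 in the root namespace, nvmf_tgt_if2 at 10.0.0.4 inside nvmf_tgt_ns_spdk) is wired up the same way, and each iptables rule is tagged with an SPDK_NVMF comment so the iptr helper seen during the earlier teardown can strip them again with iptables-save/iptables-restore.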
00:13:25.565 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:25.565 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:25.565 [2024-11-20 11:26:11.035668] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 00:13:25.565 [2024-11-20 11:26:11.035769] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:25.823 [2024-11-20 11:26:11.187595] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:25.823 [2024-11-20 11:26:11.229810] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:25.823 [2024-11-20 11:26:11.229883] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:25.823 [2024-11-20 11:26:11.229902] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:25.823 [2024-11-20 11:26:11.229916] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:25.823 [2024-11-20 11:26:11.229928] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:25.823 [2024-11-20 11:26:11.230331] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:25.823 11:26:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:25.823 11:26:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@868 -- # return 0 00:13:25.823 11:26:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:25.823 11:26:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:25.823 11:26:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:25.823 11:26:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:25.823 11:26:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:25.823 11:26:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.823 11:26:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:25.823 [2024-11-20 11:26:11.366932] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:25.823 11:26:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.823 11:26:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:25.823 11:26:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.823 11:26:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:25.823 11:26:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.823 11:26:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:13:25.823 11:26:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.823 11:26:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:25.823 [2024-11-20 11:26:11.383042] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:13:25.823 11:26:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.823 11:26:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:13:25.823 11:26:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.823 11:26:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:25.823 NULL1 00:13:25.823 11:26:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.823 11:26:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:13:25.823 11:26:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.823 11:26:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:25.823 11:26:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.823 11:26:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:13:25.823 11:26:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.823 11:26:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:26.082 11:26:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.082 11:26:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:13:26.082 [2024-11-20 11:26:11.433116] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 
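Before that fused_ordering binary attaches, fused_ordering.sh has configured the freshly started nvmf_tgt entirely over RPC. Stripped of the xtrace noise, the setup is the six rpc_cmd calls below (rpc_cmd is the autotest wrapper around the target's JSON-RPC socket; the option comments are a best-effort reading of the trace, not authoritative documentation):

# enable the TCP transport with the options the tcp harness passes (-o -u 8192)
rpc_cmd nvmf_create_transport -t tcp -o -u 8192
# create the subsystem: allow any host (-a), set a serial number, cap at 10 namespaces (-m 10)
rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
# listen on the namespaced target address set up earlier
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
# back it with a 1000 MB null bdev (512-byte blocks) and expose it as a namespace
rpc_cmd bdev_null_create NULL1 1000 512
rpc_cmd bdev_wait_for_examine
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1

The 1GB namespace the tool reports in the next lines ("Namespace ID: 1 size: 1GB") is that NULL1 bdev, and the -r transport ID string passed to fused_ordering points at the listener created here.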
00:13:26.082 [2024-11-20 11:26:11.433190] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75056 ] 00:13:26.342 Attached to nqn.2016-06.io.spdk:cnode1 00:13:26.342 Namespace ID: 1 size: 1GB 00:13:26.342 fused_ordering(0) 00:13:26.342 fused_ordering(1) 00:13:26.342 fused_ordering(2) 00:13:26.342 fused_ordering(3) 00:13:26.342 fused_ordering(4) 00:13:26.342 fused_ordering(5) 00:13:26.342 fused_ordering(6) 00:13:26.342 fused_ordering(7) 00:13:26.342 fused_ordering(8) 00:13:26.342 fused_ordering(9) 00:13:26.342 fused_ordering(10) 00:13:26.342 fused_ordering(11) 00:13:26.342 fused_ordering(12) 00:13:26.342 fused_ordering(13) 00:13:26.342 fused_ordering(14) 00:13:26.342 fused_ordering(15) 00:13:26.342 fused_ordering(16) 00:13:26.342 fused_ordering(17) 00:13:26.342 fused_ordering(18) 00:13:26.342 fused_ordering(19) 00:13:26.342 fused_ordering(20) 00:13:26.342 fused_ordering(21) 00:13:26.342 fused_ordering(22) 00:13:26.342 fused_ordering(23) 00:13:26.342 fused_ordering(24) 00:13:26.342 fused_ordering(25) 00:13:26.342 fused_ordering(26) 00:13:26.342 fused_ordering(27) 00:13:26.342 fused_ordering(28) 00:13:26.342 fused_ordering(29) 00:13:26.342 fused_ordering(30) 00:13:26.342 fused_ordering(31) 00:13:26.342 fused_ordering(32) 00:13:26.342 fused_ordering(33) 00:13:26.342 fused_ordering(34) 00:13:26.342 fused_ordering(35) 00:13:26.342 fused_ordering(36) 00:13:26.342 fused_ordering(37) 00:13:26.342 fused_ordering(38) 00:13:26.342 fused_ordering(39) 00:13:26.342 fused_ordering(40) 00:13:26.342 fused_ordering(41) 00:13:26.342 fused_ordering(42) 00:13:26.342 fused_ordering(43) 00:13:26.342 fused_ordering(44) 00:13:26.342 fused_ordering(45) 00:13:26.342 fused_ordering(46) 00:13:26.342 fused_ordering(47) 00:13:26.342 fused_ordering(48) 00:13:26.342 fused_ordering(49) 00:13:26.342 fused_ordering(50) 00:13:26.342 fused_ordering(51) 00:13:26.342 fused_ordering(52) 00:13:26.342 fused_ordering(53) 00:13:26.342 fused_ordering(54) 00:13:26.342 fused_ordering(55) 00:13:26.342 fused_ordering(56) 00:13:26.342 fused_ordering(57) 00:13:26.342 fused_ordering(58) 00:13:26.342 fused_ordering(59) 00:13:26.342 fused_ordering(60) 00:13:26.342 fused_ordering(61) 00:13:26.342 fused_ordering(62) 00:13:26.342 fused_ordering(63) 00:13:26.342 fused_ordering(64) 00:13:26.342 fused_ordering(65) 00:13:26.342 fused_ordering(66) 00:13:26.342 fused_ordering(67) 00:13:26.342 fused_ordering(68) 00:13:26.342 fused_ordering(69) 00:13:26.342 fused_ordering(70) 00:13:26.342 fused_ordering(71) 00:13:26.342 fused_ordering(72) 00:13:26.342 fused_ordering(73) 00:13:26.342 fused_ordering(74) 00:13:26.342 fused_ordering(75) 00:13:26.342 fused_ordering(76) 00:13:26.342 fused_ordering(77) 00:13:26.342 fused_ordering(78) 00:13:26.342 fused_ordering(79) 00:13:26.342 fused_ordering(80) 00:13:26.342 fused_ordering(81) 00:13:26.342 fused_ordering(82) 00:13:26.342 fused_ordering(83) 00:13:26.342 fused_ordering(84) 00:13:26.342 fused_ordering(85) 00:13:26.342 fused_ordering(86) 00:13:26.342 fused_ordering(87) 00:13:26.342 fused_ordering(88) 00:13:26.342 fused_ordering(89) 00:13:26.342 fused_ordering(90) 00:13:26.342 fused_ordering(91) 00:13:26.342 fused_ordering(92) 00:13:26.342 fused_ordering(93) 00:13:26.342 fused_ordering(94) 00:13:26.342 fused_ordering(95) 00:13:26.342 fused_ordering(96) 00:13:26.342 fused_ordering(97) 00:13:26.342 
fused_ordering(98) 00:13:26.342 fused_ordering(99) 00:13:26.342 fused_ordering(100) 00:13:26.342 fused_ordering(101) 00:13:26.342 fused_ordering(102) 00:13:26.342 fused_ordering(103) 00:13:26.342 fused_ordering(104) 00:13:26.342 fused_ordering(105) 00:13:26.342 fused_ordering(106) 00:13:26.342 fused_ordering(107) 00:13:26.342 fused_ordering(108) 00:13:26.342 fused_ordering(109) 00:13:26.342 fused_ordering(110) 00:13:26.342 fused_ordering(111) 00:13:26.342 fused_ordering(112) 00:13:26.342 fused_ordering(113) 00:13:26.342 fused_ordering(114) 00:13:26.342 fused_ordering(115) 00:13:26.342 fused_ordering(116) 00:13:26.342 fused_ordering(117) 00:13:26.342 fused_ordering(118) 00:13:26.342 fused_ordering(119) 00:13:26.342 fused_ordering(120) 00:13:26.342 fused_ordering(121) 00:13:26.342 fused_ordering(122) 00:13:26.342 fused_ordering(123) 00:13:26.342 fused_ordering(124) 00:13:26.342 fused_ordering(125) 00:13:26.342 fused_ordering(126) 00:13:26.342 fused_ordering(127) 00:13:26.342 fused_ordering(128) 00:13:26.342 fused_ordering(129) 00:13:26.342 fused_ordering(130) 00:13:26.342 fused_ordering(131) 00:13:26.342 fused_ordering(132) 00:13:26.342 fused_ordering(133) 00:13:26.342 fused_ordering(134) 00:13:26.342 fused_ordering(135) 00:13:26.342 fused_ordering(136) 00:13:26.342 fused_ordering(137) 00:13:26.342 fused_ordering(138) 00:13:26.342 fused_ordering(139) 00:13:26.342 fused_ordering(140) 00:13:26.342 fused_ordering(141) 00:13:26.342 fused_ordering(142) 00:13:26.342 fused_ordering(143) 00:13:26.342 fused_ordering(144) 00:13:26.342 fused_ordering(145) 00:13:26.342 fused_ordering(146) 00:13:26.342 fused_ordering(147) 00:13:26.342 fused_ordering(148) 00:13:26.342 fused_ordering(149) 00:13:26.342 fused_ordering(150) 00:13:26.342 fused_ordering(151) 00:13:26.342 fused_ordering(152) 00:13:26.342 fused_ordering(153) 00:13:26.342 fused_ordering(154) 00:13:26.342 fused_ordering(155) 00:13:26.342 fused_ordering(156) 00:13:26.342 fused_ordering(157) 00:13:26.342 fused_ordering(158) 00:13:26.342 fused_ordering(159) 00:13:26.342 fused_ordering(160) 00:13:26.342 fused_ordering(161) 00:13:26.342 fused_ordering(162) 00:13:26.342 fused_ordering(163) 00:13:26.342 fused_ordering(164) 00:13:26.342 fused_ordering(165) 00:13:26.342 fused_ordering(166) 00:13:26.342 fused_ordering(167) 00:13:26.342 fused_ordering(168) 00:13:26.342 fused_ordering(169) 00:13:26.342 fused_ordering(170) 00:13:26.342 fused_ordering(171) 00:13:26.342 fused_ordering(172) 00:13:26.342 fused_ordering(173) 00:13:26.342 fused_ordering(174) 00:13:26.342 fused_ordering(175) 00:13:26.342 fused_ordering(176) 00:13:26.342 fused_ordering(177) 00:13:26.342 fused_ordering(178) 00:13:26.342 fused_ordering(179) 00:13:26.342 fused_ordering(180) 00:13:26.342 fused_ordering(181) 00:13:26.342 fused_ordering(182) 00:13:26.342 fused_ordering(183) 00:13:26.342 fused_ordering(184) 00:13:26.342 fused_ordering(185) 00:13:26.342 fused_ordering(186) 00:13:26.342 fused_ordering(187) 00:13:26.342 fused_ordering(188) 00:13:26.342 fused_ordering(189) 00:13:26.342 fused_ordering(190) 00:13:26.342 fused_ordering(191) 00:13:26.342 fused_ordering(192) 00:13:26.342 fused_ordering(193) 00:13:26.342 fused_ordering(194) 00:13:26.342 fused_ordering(195) 00:13:26.342 fused_ordering(196) 00:13:26.342 fused_ordering(197) 00:13:26.342 fused_ordering(198) 00:13:26.342 fused_ordering(199) 00:13:26.342 fused_ordering(200) 00:13:26.342 fused_ordering(201) 00:13:26.342 fused_ordering(202) 00:13:26.342 fused_ordering(203) 00:13:26.342 fused_ordering(204) 00:13:26.342 fused_ordering(205) 
00:13:26.601 fused_ordering(206) ... fused_ordering(957) 00:13:28.011 [fused_ordering iterations 206-957 elided; per-iteration output was identical, with timestamps running from 00:13:26.601 to 00:13:28.011]
fused_ordering(958) 00:13:28.011 fused_ordering(959) 00:13:28.011 fused_ordering(960) 00:13:28.011 fused_ordering(961) 00:13:28.011 fused_ordering(962) 00:13:28.011 fused_ordering(963) 00:13:28.011 fused_ordering(964) 00:13:28.011 fused_ordering(965) 00:13:28.011 fused_ordering(966) 00:13:28.011 fused_ordering(967) 00:13:28.011 fused_ordering(968) 00:13:28.011 fused_ordering(969) 00:13:28.011 fused_ordering(970) 00:13:28.011 fused_ordering(971) 00:13:28.011 fused_ordering(972) 00:13:28.011 fused_ordering(973) 00:13:28.011 fused_ordering(974) 00:13:28.011 fused_ordering(975) 00:13:28.011 fused_ordering(976) 00:13:28.011 fused_ordering(977) 00:13:28.011 fused_ordering(978) 00:13:28.011 fused_ordering(979) 00:13:28.011 fused_ordering(980) 00:13:28.011 fused_ordering(981) 00:13:28.011 fused_ordering(982) 00:13:28.011 fused_ordering(983) 00:13:28.011 fused_ordering(984) 00:13:28.011 fused_ordering(985) 00:13:28.011 fused_ordering(986) 00:13:28.011 fused_ordering(987) 00:13:28.011 fused_ordering(988) 00:13:28.011 fused_ordering(989) 00:13:28.011 fused_ordering(990) 00:13:28.011 fused_ordering(991) 00:13:28.011 fused_ordering(992) 00:13:28.011 fused_ordering(993) 00:13:28.011 fused_ordering(994) 00:13:28.011 fused_ordering(995) 00:13:28.011 fused_ordering(996) 00:13:28.011 fused_ordering(997) 00:13:28.011 fused_ordering(998) 00:13:28.011 fused_ordering(999) 00:13:28.011 fused_ordering(1000) 00:13:28.011 fused_ordering(1001) 00:13:28.011 fused_ordering(1002) 00:13:28.011 fused_ordering(1003) 00:13:28.011 fused_ordering(1004) 00:13:28.011 fused_ordering(1005) 00:13:28.011 fused_ordering(1006) 00:13:28.011 fused_ordering(1007) 00:13:28.011 fused_ordering(1008) 00:13:28.011 fused_ordering(1009) 00:13:28.011 fused_ordering(1010) 00:13:28.011 fused_ordering(1011) 00:13:28.011 fused_ordering(1012) 00:13:28.011 fused_ordering(1013) 00:13:28.011 fused_ordering(1014) 00:13:28.011 fused_ordering(1015) 00:13:28.011 fused_ordering(1016) 00:13:28.011 fused_ordering(1017) 00:13:28.011 fused_ordering(1018) 00:13:28.011 fused_ordering(1019) 00:13:28.011 fused_ordering(1020) 00:13:28.011 fused_ordering(1021) 00:13:28.011 fused_ordering(1022) 00:13:28.011 fused_ordering(1023) 00:13:28.011 11:26:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:13:28.011 11:26:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:13:28.011 11:26:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:28.011 11:26:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # sync 00:13:28.011 11:26:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:28.011 11:26:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set +e 00:13:28.011 11:26:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:28.011 11:26:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:28.269 rmmod nvme_tcp 00:13:28.269 rmmod nvme_fabrics 00:13:28.269 rmmod nvme_keyring 00:13:28.269 11:26:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:28.269 11:26:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@128 -- # set -e 00:13:28.269 11:26:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@129 -- # return 0 00:13:28.269 11:26:13 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@517 -- # '[' -n 75022 ']' 00:13:28.269 11:26:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@518 -- # killprocess 75022 00:13:28.269 11:26:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # '[' -z 75022 ']' 00:13:28.269 11:26:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # kill -0 75022 00:13:28.269 11:26:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # uname 00:13:28.269 11:26:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:28.269 11:26:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75022 00:13:28.269 11:26:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:13:28.269 11:26:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:13:28.269 killing process with pid 75022 00:13:28.269 11:26:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75022' 00:13:28.269 11:26:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@973 -- # kill 75022 00:13:28.269 11:26:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@978 -- # wait 75022 00:13:28.269 11:26:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:28.269 11:26:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:28.269 11:26:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:28.269 11:26:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # iptr 00:13:28.269 11:26:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-save 00:13:28.269 11:26:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:28.269 11:26:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-restore 00:13:28.269 11:26:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:28.269 11:26:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:13:28.269 11:26:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:13:28.528 11:26:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:13:28.528 11:26:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:13:28.528 11:26:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:13:28.528 11:26:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:13:28.528 11:26:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:13:28.528 11:26:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:13:28.528 11:26:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@240 -- # 
ip link set nvmf_tgt_br2 down 00:13:28.528 11:26:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:13:28.528 11:26:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:13:28.528 11:26:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:13:28.528 11:26:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:28.528 11:26:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:28.528 11:26:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@246 -- # remove_spdk_ns 00:13:28.528 11:26:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:28.528 11:26:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:28.528 11:26:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:28.528 11:26:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@300 -- # return 0 00:13:28.528 00:13:28.528 real 0m3.712s 00:13:28.528 user 0m4.293s 00:13:28.528 sys 0m1.341s 00:13:28.528 11:26:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:28.528 11:26:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:28.528 ************************************ 00:13:28.528 END TEST nvmf_fused_ordering 00:13:28.528 ************************************ 00:13:28.528 11:26:14 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:13:28.528 11:26:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:28.528 11:26:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:28.528 11:26:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:28.787 ************************************ 00:13:28.787 START TEST nvmf_ns_masking 00:13:28.787 ************************************ 00:13:28.787 11:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1129 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:13:28.787 * Looking for test storage... 
00:13:28.787 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:28.787 11:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:28.787 11:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # lcov --version 00:13:28.787 11:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:28.787 11:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:28.787 11:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:28.787 11:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:28.787 11:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:28.787 11:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-: 00:13:28.787 11:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1 00:13:28.787 11:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # IFS=.-: 00:13:28.787 11:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2 00:13:28.787 11:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@338 -- # local 'op=<' 00:13:28.787 11:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2 00:13:28.787 11:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1 00:13:28.788 11:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:28.788 11:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in 00:13:28.788 11:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1 00:13:28.788 11:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:28.788 11:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:28.788 11:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1 00:13:28.788 11:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1 00:13:28.788 11:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:28.788 11:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1 00:13:28.788 11:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1 00:13:28.788 11:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2 00:13:28.788 11:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2 00:13:28.788 11:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:28.788 11:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2 00:13:28.788 11:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2 00:13:28.788 11:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:28.788 11:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:28.788 11:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0 00:13:28.788 11:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:28.788 11:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:28.788 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:28.788 --rc genhtml_branch_coverage=1 00:13:28.788 --rc genhtml_function_coverage=1 00:13:28.788 --rc genhtml_legend=1 00:13:28.788 --rc geninfo_all_blocks=1 00:13:28.788 --rc geninfo_unexecuted_blocks=1 00:13:28.788 00:13:28.788 ' 00:13:28.788 11:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:28.788 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:28.788 --rc genhtml_branch_coverage=1 00:13:28.788 --rc genhtml_function_coverage=1 00:13:28.788 --rc genhtml_legend=1 00:13:28.788 --rc geninfo_all_blocks=1 00:13:28.788 --rc geninfo_unexecuted_blocks=1 00:13:28.788 00:13:28.788 ' 00:13:28.788 11:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:28.788 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:28.788 --rc genhtml_branch_coverage=1 00:13:28.788 --rc genhtml_function_coverage=1 00:13:28.788 --rc genhtml_legend=1 00:13:28.788 --rc geninfo_all_blocks=1 00:13:28.788 --rc geninfo_unexecuted_blocks=1 00:13:28.788 00:13:28.788 ' 00:13:28.788 11:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:28.788 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:28.788 --rc genhtml_branch_coverage=1 00:13:28.788 --rc genhtml_function_coverage=1 00:13:28.788 --rc genhtml_legend=1 00:13:28.788 --rc geninfo_all_blocks=1 00:13:28.788 --rc geninfo_unexecuted_blocks=1 00:13:28.788 00:13:28.788 ' 00:13:28.788 11:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:28.788 11:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- 
# uname -s 00:13:28.788 11:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:28.788 11:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:28.788 11:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:28.788 11:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:28.788 11:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:28.788 11:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:28.788 11:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:28.788 11:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:28.788 11:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:28.788 11:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:28.788 11:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a 00:13:28.788 11:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=806d442c-08ba-4719-b76d-33169283459a 00:13:28.788 11:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:28.788 11:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:28.788 11:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:28.788 11:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:28.788 11:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:28.788 11:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob 00:13:28.788 11:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:28.788 11:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:28.788 11:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:28.788 11:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:28.788 11:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:28.788 11:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:28.788 11:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:13:28.788 11:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:28.788 11:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # : 0 00:13:28.788 11:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:28.788 11:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:28.788 11:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:28.788 11:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:28.788 11:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:28.788 11:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:28.788 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:28.788 11:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:28.788 11:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:28.788 11:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:28.788 11:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:28.788 11:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@11 -- # 
hostsock=/var/tmp/host.sock 00:13:28.788 11:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:13:28.788 11:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:13:28.788 11:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=f22d3553-61aa-4b82-aaab-125389a79ea6 00:13:28.788 11:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:13:28.788 11:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=36500bb8-40df-414f-89ef-98c18ad489bf 00:13:28.788 11:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:13:28.788 11:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:13:28.788 11:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:13:28.788 11:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:13:28.788 11:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=10baaf47-2cd1-489c-8144-88cd6b551ee3 00:13:28.788 11:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:13:28.788 11:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:28.789 11:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:28.789 11:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:28.789 11:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:28.789 11:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:28.789 11:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:28.789 11:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:28.789 11:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:28.789 11:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:13:28.789 11:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:13:28.789 11:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:13:28.789 11:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:13:28.789 11:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:13:28.789 11:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@460 -- # nvmf_veth_init 00:13:28.789 11:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:28.789 11:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:13:28.789 11:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:13:28.789 11:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:13:28.789 11:26:14 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:28.789 11:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:13:28.789 11:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:28.789 11:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:13:28.789 11:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:28.789 11:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:13:28.789 11:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:28.789 11:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:28.789 11:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:28.789 11:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:28.789 11:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:28.789 11:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:28.789 11:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:13:28.789 Cannot find device "nvmf_init_br" 00:13:28.789 11:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@162 -- # true 00:13:28.789 11:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:13:28.789 Cannot find device "nvmf_init_br2" 00:13:28.789 11:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@163 -- # true 00:13:28.789 11:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:13:28.789 Cannot find device "nvmf_tgt_br" 00:13:29.048 11:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@164 -- # true 00:13:29.048 11:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:13:29.048 Cannot find device "nvmf_tgt_br2" 00:13:29.048 11:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@165 -- # true 00:13:29.048 11:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:13:29.048 Cannot find device "nvmf_init_br" 00:13:29.048 11:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@166 -- # true 00:13:29.048 11:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:13:29.048 Cannot find device "nvmf_init_br2" 00:13:29.048 11:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@167 -- # true 00:13:29.048 11:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:13:29.048 Cannot find device "nvmf_tgt_br" 00:13:29.048 11:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@168 -- # true 00:13:29.048 11:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:13:29.048 Cannot find device 
"nvmf_tgt_br2" 00:13:29.048 11:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@169 -- # true 00:13:29.048 11:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:13:29.048 Cannot find device "nvmf_br" 00:13:29.048 11:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@170 -- # true 00:13:29.048 11:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:13:29.048 Cannot find device "nvmf_init_if" 00:13:29.048 11:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@171 -- # true 00:13:29.048 11:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:13:29.048 Cannot find device "nvmf_init_if2" 00:13:29.048 11:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@172 -- # true 00:13:29.048 11:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:29.048 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:29.048 11:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@173 -- # true 00:13:29.048 11:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:29.049 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:29.049 11:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@174 -- # true 00:13:29.049 11:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:13:29.049 11:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:29.049 11:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:13:29.049 11:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:29.049 11:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:29.049 11:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:29.049 11:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:29.049 11:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:29.049 11:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:13:29.049 11:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:13:29.049 11:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:13:29.049 11:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:13:29.049 11:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:13:29.049 11:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:13:29.049 
11:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:13:29.049 11:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:13:29.049 11:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:13:29.049 11:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:29.049 11:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:29.049 11:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:29.308 11:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:13:29.308 11:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:13:29.308 11:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:13:29.308 11:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:13:29.308 11:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:29.308 11:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:29.308 11:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:29.308 11:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:13:29.308 11:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:13:29.308 11:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:13:29.308 11:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:29.308 11:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:13:29.308 11:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:13:29.308 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:29.308 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.062 ms 00:13:29.308 00:13:29.308 --- 10.0.0.3 ping statistics --- 00:13:29.308 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:29.308 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:13:29.308 11:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:13:29.308 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:13:29.308 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.051 ms 00:13:29.308 00:13:29.308 --- 10.0.0.4 ping statistics --- 00:13:29.308 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:29.308 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:13:29.308 11:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:29.308 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:29.308 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:13:29.308 00:13:29.308 --- 10.0.0.1 ping statistics --- 00:13:29.308 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:29.308 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:13:29.308 11:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:13:29.308 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:29.308 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.046 ms 00:13:29.308 00:13:29.308 --- 10.0.0.2 ping statistics --- 00:13:29.308 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:29.308 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:13:29.308 11:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:29.308 11:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@461 -- # return 0 00:13:29.308 11:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:29.308 11:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:29.308 11:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:29.308 11:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:29.308 11:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:29.308 11:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:29.308 11:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:29.308 11:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:13:29.308 11:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:29.308 11:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:29.308 11:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:29.308 11:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:13:29.308 11:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@509 -- # nvmfpid=75305 00:13:29.308 11:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@510 -- # waitforlisten 75305 00:13:29.308 11:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 75305 ']' 00:13:29.308 11:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:29.308 11:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:29.308 11:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:29.308 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:29.308 11:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:29.308 11:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:29.308 [2024-11-20 11:26:14.823541] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 00:13:29.308 [2024-11-20 11:26:14.823645] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:29.567 [2024-11-20 11:26:14.974045] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:29.567 [2024-11-20 11:26:15.011264] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:29.567 [2024-11-20 11:26:15.011324] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:29.567 [2024-11-20 11:26:15.011337] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:29.567 [2024-11-20 11:26:15.011347] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:29.567 [2024-11-20 11:26:15.011356] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:29.567 [2024-11-20 11:26:15.011744] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:29.567 11:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:29.567 11:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:13:29.567 11:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:29.567 11:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:29.567 11:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:29.826 11:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:29.826 11:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:13:29.826 [2024-11-20 11:26:15.410286] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:30.083 11:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:13:30.083 11:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:13:30.084 11:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:13:30.341 Malloc1 00:13:30.341 11:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:13:30.599 Malloc2 00:13:30.599 11:26:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:30.857 11:26:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:13:31.115 11:26:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:13:31.373 [2024-11-20 11:26:16.943934] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:13:31.632 11:26:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:13:31.632 11:26:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 10baaf47-2cd1-489c-8144-88cd6b551ee3 -a 10.0.0.3 -s 4420 -i 4 00:13:31.632 11:26:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:13:31.632 11:26:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:13:31.632 11:26:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:31.632 11:26:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:13:31.632 11:26:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:13:33.551 11:26:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:33.551 11:26:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:33.551 11:26:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:33.551 11:26:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:13:33.551 11:26:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:33.551 11:26:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:13:33.551 11:26:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:13:33.551 11:26:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:13:33.840 11:26:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:13:33.840 11:26:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:13:33.840 11:26:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:13:33.840 11:26:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:33.840 11:26:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:33.840 [ 0]:0x1 00:13:33.840 11:26:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:33.840 11:26:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 
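Editor's note on the check being traced here: the ns_is_visible helper decides visibility entirely from the host side. It greps the namespace ID out of nvme list-ns and then reads the NGUID with nvme id-ns; an all-zero NGUID means the target is not actually exposing that namespace to this host. A minimal stand-alone sketch of the same check follows; the device node and NSID are illustrative placeholders, not values taken from this run.

# sketch of the host-side visibility check (assumed device /dev/nvme0, NSID 1)
ctrl=/dev/nvme0                       # controller device created by 'nvme connect'
nsid=0x1                              # namespace ID under test
nvme list-ns "$ctrl" | grep "$nsid"   # is the NSID listed at all?
nguid=$(nvme id-ns "$ctrl" -n "$nsid" -o json | jq -r .nguid)
# an all-zero NGUID means the namespace is masked from this host
[[ "$nguid" != "00000000000000000000000000000000" ]] && echo "NSID $nsid visible (nguid=$nguid)"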
00:13:33.840 11:26:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=690cc6ae4ba442f8afd7a8010c091291 00:13:33.840 11:26:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 690cc6ae4ba442f8afd7a8010c091291 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:33.840 11:26:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:13:34.099 11:26:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:13:34.099 11:26:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:34.099 11:26:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:34.099 [ 0]:0x1 00:13:34.099 11:26:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:34.099 11:26:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:34.099 11:26:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=690cc6ae4ba442f8afd7a8010c091291 00:13:34.099 11:26:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 690cc6ae4ba442f8afd7a8010c091291 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:34.099 11:26:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:13:34.099 11:26:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:34.099 11:26:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:34.099 [ 1]:0x2 00:13:34.099 11:26:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:34.099 11:26:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:34.099 11:26:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=9e5ddbdbdd184b989059b694f9da70ff 00:13:34.099 11:26:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 9e5ddbdbdd184b989059b694f9da70ff != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:34.099 11:26:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:13:34.099 11:26:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:34.357 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:34.357 11:26:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:34.615 11:26:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:13:34.873 11:26:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:13:34.873 11:26:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 10baaf47-2cd1-489c-8144-88cd6b551ee3 -a 10.0.0.3 -s 4420 -i 4 00:13:35.132 11:26:20 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:13:35.132 11:26:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:13:35.132 11:26:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:35.132 11:26:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 1 ]] 00:13:35.132 11:26:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=1 00:13:35.132 11:26:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:13:37.034 11:26:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:37.034 11:26:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:37.034 11:26:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:37.034 11:26:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:13:37.034 11:26:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:37.034 11:26:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:13:37.034 11:26:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:13:37.034 11:26:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:13:37.034 11:26:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:13:37.034 11:26:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:13:37.034 11:26:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:13:37.034 11:26:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:13:37.034 11:26:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:13:37.034 11:26:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:13:37.034 11:26:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:37.034 11:26:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:13:37.034 11:26:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:37.034 11:26:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:13:37.034 11:26:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:37.034 11:26:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:37.034 11:26:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:37.034 11:26:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:37.293 11:26:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:13:37.293 11:26:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:37.293 11:26:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:13:37.293 11:26:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:37.293 11:26:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:37.293 11:26:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:37.293 11:26:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:13:37.293 11:26:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:37.293 11:26:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:37.293 [ 0]:0x2 00:13:37.293 11:26:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:37.293 11:26:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:37.293 11:26:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=9e5ddbdbdd184b989059b694f9da70ff 00:13:37.293 11:26:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 9e5ddbdbdd184b989059b694f9da70ff != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:37.293 11:26:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:37.552 11:26:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:13:37.552 11:26:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:37.552 11:26:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:37.552 [ 0]:0x1 00:13:37.552 11:26:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:37.552 11:26:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:37.552 11:26:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=690cc6ae4ba442f8afd7a8010c091291 00:13:37.552 11:26:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 690cc6ae4ba442f8afd7a8010c091291 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:37.552 11:26:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:13:37.552 11:26:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:37.552 11:26:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:37.552 [ 1]:0x2 00:13:37.552 11:26:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:37.552 11:26:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:37.552 11:26:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nguid=9e5ddbdbdd184b989059b694f9da70ff 00:13:37.552 11:26:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 9e5ddbdbdd184b989059b694f9da70ff != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:37.552 11:26:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:38.118 11:26:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:13:38.118 11:26:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:13:38.118 11:26:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:13:38.118 11:26:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:13:38.118 11:26:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:38.118 11:26:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:13:38.118 11:26:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:38.118 11:26:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:13:38.118 11:26:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:38.118 11:26:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:38.118 11:26:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:38.118 11:26:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:38.118 11:26:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:13:38.118 11:26:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:38.118 11:26:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:13:38.118 11:26:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:38.118 11:26:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:38.118 11:26:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:38.118 11:26:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:13:38.118 11:26:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:38.118 11:26:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:38.118 [ 0]:0x2 00:13:38.118 11:26:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:38.118 11:26:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:38.118 11:26:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=9e5ddbdbdd184b989059b694f9da70ff 00:13:38.118 11:26:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@45 -- # [[ 9e5ddbdbdd184b989059b694f9da70ff != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:38.118 11:26:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:13:38.118 11:26:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:38.118 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:38.118 11:26:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:38.375 11:26:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:13:38.375 11:26:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 10baaf47-2cd1-489c-8144-88cd6b551ee3 -a 10.0.0.3 -s 4420 -i 4 00:13:38.633 11:26:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:13:38.633 11:26:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:13:38.633 11:26:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:38.633 11:26:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:13:38.633 11:26:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:13:38.633 11:26:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:13:40.531 11:26:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:40.531 11:26:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:40.531 11:26:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:40.531 11:26:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:13:40.531 11:26:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:40.531 11:26:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:13:40.531 11:26:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:13:40.531 11:26:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:13:40.531 11:26:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:13:40.531 11:26:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:13:40.531 11:26:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:13:40.531 11:26:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:40.531 11:26:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:40.531 [ 0]:0x1 00:13:40.531 11:26:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o 
json 00:13:40.531 11:26:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:40.787 11:26:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=690cc6ae4ba442f8afd7a8010c091291 00:13:40.787 11:26:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 690cc6ae4ba442f8afd7a8010c091291 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:40.788 11:26:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:13:40.788 11:26:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:40.788 11:26:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:40.788 [ 1]:0x2 00:13:40.788 11:26:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:40.788 11:26:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:40.788 11:26:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=9e5ddbdbdd184b989059b694f9da70ff 00:13:40.788 11:26:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 9e5ddbdbdd184b989059b694f9da70ff != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:40.788 11:26:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:41.045 11:26:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:13:41.045 11:26:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:13:41.045 11:26:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:13:41.045 11:26:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:13:41.045 11:26:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:41.045 11:26:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:13:41.045 11:26:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:41.046 11:26:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:13:41.046 11:26:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:41.046 11:26:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:41.046 11:26:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:41.046 11:26:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:41.046 11:26:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:13:41.046 11:26:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:41.046 11:26:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 
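Editor's note on the masking workflow this part of the trace exercises: a namespace added with --no-auto-visible starts out hidden from every host, nvmf_ns_add_host exposes it to one host NQN, and nvmf_ns_remove_host hides it again, with the connected host seeing the NGUID flip to all zeros without reconnecting. A condensed target-side sketch using the same rpc.py calls as the trace (paths and NQNs copied from this log, otherwise illustrative):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
subsys=nqn.2016-06.io.spdk:cnode1
host=nqn.2016-06.io.spdk:host1
# a namespace added with --no-auto-visible is hidden from all hosts by default
$rpc nvmf_subsystem_add_ns "$subsys" Malloc1 -n 1 --no-auto-visible
# expose NSID 1 to a single host NQN
$rpc nvmf_ns_add_host "$subsys" 1 "$host"
# ...the connected host now reports a real NGUID for NSID 1...
# hide it again; the host immediately sees an all-zero NGUID for NSID 1
$rpc nvmf_ns_remove_host "$subsys" 1 "$host"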
00:13:41.046 11:26:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:41.046 11:26:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:41.046 11:26:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:41.046 11:26:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:13:41.046 11:26:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:41.046 11:26:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:41.046 [ 0]:0x2 00:13:41.046 11:26:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:41.046 11:26:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:41.304 11:26:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=9e5ddbdbdd184b989059b694f9da70ff 00:13:41.304 11:26:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 9e5ddbdbdd184b989059b694f9da70ff != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:41.304 11:26:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:13:41.304 11:26:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:13:41.304 11:26:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:13:41.304 11:26:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:41.304 11:26:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:41.304 11:26:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:41.304 11:26:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:41.304 11:26:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:41.304 11:26:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:41.304 11:26:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:41.304 11:26:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:13:41.304 11:26:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:13:41.562 [2024-11-20 11:26:26.918234] nvmf_rpc.c:1870:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:13:41.562 2024/11/20 11:26:26 error on JSON-RPC call, method: nvmf_ns_remove_host, params: map[host:nqn.2016-06.io.spdk:host1 
nqn:nqn.2016-06.io.spdk:cnode1 nsid:2], err: error received for nvmf_ns_remove_host method, err: Code=-32602 Msg=Invalid parameters 00:13:41.562 request: 00:13:41.562 { 00:13:41.562 "method": "nvmf_ns_remove_host", 00:13:41.562 "params": { 00:13:41.562 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:41.562 "nsid": 2, 00:13:41.562 "host": "nqn.2016-06.io.spdk:host1" 00:13:41.562 } 00:13:41.562 } 00:13:41.562 Got JSON-RPC error response 00:13:41.562 GoRPCClient: error on JSON-RPC call 00:13:41.562 11:26:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:13:41.562 11:26:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:41.562 11:26:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:41.562 11:26:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:41.562 11:26:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:13:41.562 11:26:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:13:41.562 11:26:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:13:41.562 11:26:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:13:41.562 11:26:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:41.562 11:26:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:13:41.562 11:26:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:41.562 11:26:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:13:41.562 11:26:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:41.562 11:26:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:41.562 11:26:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:41.562 11:26:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:41.562 11:26:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:13:41.562 11:26:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:41.562 11:26:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:13:41.562 11:26:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:41.562 11:26:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:41.562 11:26:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:41.562 11:26:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:13:41.562 11:26:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:41.562 11:26:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # 
grep 0x2 00:13:41.562 [ 0]:0x2 00:13:41.562 11:26:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:41.562 11:26:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:41.562 11:26:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=9e5ddbdbdd184b989059b694f9da70ff 00:13:41.562 11:26:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 9e5ddbdbdd184b989059b694f9da70ff != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:41.562 11:26:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:13:41.562 11:26:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:41.562 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:41.563 11:26:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=75674 00:13:41.563 11:26:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:13:41.563 11:26:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:13:41.563 11:26:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 75674 /var/tmp/host.sock 00:13:41.563 11:26:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 75674 ']' 00:13:41.563 11:26:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:13:41.563 11:26:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:41.563 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:13:41.563 11:26:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:13:41.563 11:26:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:41.563 11:26:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:41.822 [2024-11-20 11:26:27.168524] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 
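Editor's note: from this point the test switches initiators. The spdk_tgt instance started just above on /var/tmp/host.sock (pid 75674) acts as the NVMe-oF host, so namespace visibility can be verified through bdev_nvme instead of the kernel nvme driver. A minimal sketch of that host-side attach, mirroring the rpc.py calls that follow in the trace (address, port, and NQNs as in this log):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
# drive the host-side SPDK instance over its own RPC socket
$rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.3 -f ipv4 -s 4420 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0
# each namespace visible to this host NQN shows up as a bdev (e.g. nvme0n1)
$rpc -s /var/tmp/host.sock bdev_get_bdevs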
00:13:41.822 [2024-11-20 11:26:27.168649] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75674 ] 00:13:41.822 [2024-11-20 11:26:27.317540] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:41.822 [2024-11-20 11:26:27.356090] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:42.080 11:26:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:42.080 11:26:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:13:42.080 11:26:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:42.339 11:26:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:42.597 11:26:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid f22d3553-61aa-4b82-aaab-125389a79ea6 00:13:42.597 11:26:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:13:42.597 11:26:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g F22D355361AA4B82AAAB125389A79EA6 -i 00:13:42.856 11:26:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 36500bb8-40df-414f-89ef-98c18ad489bf 00:13:42.856 11:26:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:13:42.856 11:26:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 36500BB840DF414F89EF98C18AD489BF -i 00:13:43.425 11:26:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:43.425 11:26:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:13:43.991 11:26:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.3 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:13:43.991 11:26:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.3 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:13:44.249 nvme0n1 00:13:44.249 11:26:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.3 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:13:44.249 11:26:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.3 -f ipv4 
-s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:13:44.507 nvme1n2 00:13:44.507 11:26:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:13:44.507 11:26:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:13:44.507 11:26:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:13:44.507 11:26:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:13:44.507 11:26:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:13:44.765 11:26:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:13:44.765 11:26:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:13:44.765 11:26:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:13:44.765 11:26:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:13:45.337 11:26:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ f22d3553-61aa-4b82-aaab-125389a79ea6 == \f\2\2\d\3\5\5\3\-\6\1\a\a\-\4\b\8\2\-\a\a\a\b\-\1\2\5\3\8\9\a\7\9\e\a\6 ]] 00:13:45.337 11:26:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:13:45.337 11:26:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:13:45.337 11:26:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:13:45.611 11:26:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 36500bb8-40df-414f-89ef-98c18ad489bf == \3\6\5\0\0\b\b\8\-\4\0\d\f\-\4\1\4\f\-\8\9\e\f\-\9\8\c\1\8\a\d\4\8\9\b\f ]] 00:13:45.611 11:26:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:45.870 11:26:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:46.438 11:26:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # uuid2nguid f22d3553-61aa-4b82-aaab-125389a79ea6 00:13:46.438 11:26:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:13:46.438 11:26:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g F22D355361AA4B82AAAB125389A79EA6 00:13:46.438 11:26:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:13:46.438 11:26:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g F22D355361AA4B82AAAB125389A79EA6 00:13:46.438 11:26:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local 
arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:46.438 11:26:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:46.438 11:26:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:46.438 11:26:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:46.438 11:26:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:46.438 11:26:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:46.438 11:26:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:46.438 11:26:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:13:46.438 11:26:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g F22D355361AA4B82AAAB125389A79EA6 00:13:46.697 [2024-11-20 11:26:32.076446] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: invalid 00:13:46.697 [2024-11-20 11:26:32.076495] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1: bdev invalid cannot be opened, error=-19 00:13:46.697 [2024-11-20 11:26:32.076508] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:46.698 2024/11/20 11:26:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:invalid nguid:F22D355361AA4B82AAAB125389A79EA6 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:13:46.698 request: 00:13:46.698 { 00:13:46.698 "method": "nvmf_subsystem_add_ns", 00:13:46.698 "params": { 00:13:46.698 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:46.698 "namespace": { 00:13:46.698 "bdev_name": "invalid", 00:13:46.698 "nsid": 1, 00:13:46.698 "nguid": "F22D355361AA4B82AAAB125389A79EA6", 00:13:46.698 "no_auto_visible": false 00:13:46.698 } 00:13:46.698 } 00:13:46.698 } 00:13:46.698 Got JSON-RPC error response 00:13:46.698 GoRPCClient: error on JSON-RPC call 00:13:46.698 11:26:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:13:46.698 11:26:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:46.698 11:26:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:46.698 11:26:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:46.698 11:26:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # uuid2nguid f22d3553-61aa-4b82-aaab-125389a79ea6 00:13:46.698 11:26:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:13:46.698 11:26:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g F22D355361AA4B82AAAB125389A79EA6 -i 00:13:46.956 11:26:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@143 -- # sleep 2s 00:13:48.856 11:26:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # hostrpc bdev_get_bdevs 00:13:48.856 11:26:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # jq length 00:13:48.856 11:26:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:13:49.422 11:26:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # (( 0 == 0 )) 00:13:49.422 11:26:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@146 -- # killprocess 75674 00:13:49.423 11:26:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 75674 ']' 00:13:49.423 11:26:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 75674 00:13:49.423 11:26:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:13:49.423 11:26:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:49.423 11:26:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75674 00:13:49.423 killing process with pid 75674 00:13:49.423 11:26:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:13:49.423 11:26:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:13:49.423 11:26:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75674' 00:13:49.423 11:26:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 75674 00:13:49.423 11:26:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 75674 00:13:49.680 11:26:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@147 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:49.939 11:26:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:13:49.939 11:26:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@150 -- # nvmftestfini 00:13:49.939 11:26:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:49.939 11:26:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # sync 00:13:49.939 11:26:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:49.939 11:26:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set +e 00:13:49.939 11:26:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:49.939 11:26:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:49.939 rmmod nvme_tcp 00:13:49.939 rmmod nvme_fabrics 00:13:49.939 rmmod nvme_keyring 00:13:49.939 11:26:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:49.939 11:26:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@128 -- # set -e 00:13:49.939 11:26:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@129 -- # return 0 00:13:49.939 11:26:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@517 -- # '[' -n 75305 ']' 00:13:49.939 11:26:35 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@518 -- # killprocess 75305 00:13:49.939 11:26:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 75305 ']' 00:13:49.939 11:26:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 75305 00:13:49.939 11:26:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:13:49.939 11:26:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:49.939 11:26:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75305 00:13:49.939 killing process with pid 75305 00:13:49.939 11:26:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:49.939 11:26:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:49.939 11:26:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75305' 00:13:49.939 11:26:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 75305 00:13:49.939 11:26:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 75305 00:13:50.198 11:26:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:50.198 11:26:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:50.198 11:26:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:50.198 11:26:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # iptr 00:13:50.198 11:26:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-save 00:13:50.198 11:26:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:50.198 11:26:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-restore 00:13:50.198 11:26:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:50.198 11:26:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:13:50.198 11:26:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:13:50.198 11:26:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:13:50.198 11:26:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:13:50.198 11:26:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:13:50.198 11:26:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:13:50.198 11:26:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:13:50.198 11:26:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:13:50.198 11:26:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:13:50.198 11:26:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:13:50.198 11:26:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:13:50.198 11:26:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:13:50.456 11:26:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:50.456 11:26:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:50.456 11:26:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@246 -- # remove_spdk_ns 00:13:50.456 11:26:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:50.456 11:26:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:50.456 11:26:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:50.456 11:26:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@300 -- # return 0 00:13:50.456 00:13:50.456 real 0m21.759s 00:13:50.456 user 0m37.766s 00:13:50.456 sys 0m3.047s 00:13:50.456 11:26:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:50.456 ************************************ 00:13:50.456 END TEST nvmf_ns_masking 00:13:50.456 ************************************ 00:13:50.456 11:26:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:50.456 11:26:35 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 0 -eq 1 ]] 00:13:50.456 11:26:35 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 0 -eq 1 ]] 00:13:50.456 11:26:35 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:13:50.456 11:26:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:50.456 11:26:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:50.456 11:26:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:50.456 ************************************ 00:13:50.456 START TEST nvmf_auth_target 00:13:50.456 ************************************ 00:13:50.456 11:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:13:50.456 * Looking for test storage... 
00:13:50.456 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:50.456 11:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:50.456 11:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:50.456 11:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # lcov --version 00:13:50.716 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:50.716 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:50.716 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:50.716 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:50.716 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:13:50.716 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:13:50.716 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:13:50.716 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:13:50.716 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:13:50.716 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:13:50.716 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:13:50.716 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:50.716 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:13:50.716 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:13:50.716 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:50.716 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:50.716 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:13:50.716 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:13:50.716 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:50.716 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:13:50.716 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:13:50.716 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:13:50.716 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:13:50.716 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:50.716 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:13:50.716 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:13:50.716 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:50.716 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:50.716 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:13:50.716 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:50.716 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:50.716 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:50.716 --rc genhtml_branch_coverage=1 00:13:50.716 --rc genhtml_function_coverage=1 00:13:50.716 --rc genhtml_legend=1 00:13:50.716 --rc geninfo_all_blocks=1 00:13:50.716 --rc geninfo_unexecuted_blocks=1 00:13:50.716 00:13:50.716 ' 00:13:50.716 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:50.716 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:50.716 --rc genhtml_branch_coverage=1 00:13:50.716 --rc genhtml_function_coverage=1 00:13:50.716 --rc genhtml_legend=1 00:13:50.716 --rc geninfo_all_blocks=1 00:13:50.716 --rc geninfo_unexecuted_blocks=1 00:13:50.716 00:13:50.716 ' 00:13:50.716 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:50.716 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:50.716 --rc genhtml_branch_coverage=1 00:13:50.716 --rc genhtml_function_coverage=1 00:13:50.716 --rc genhtml_legend=1 00:13:50.716 --rc geninfo_all_blocks=1 00:13:50.716 --rc geninfo_unexecuted_blocks=1 00:13:50.716 00:13:50.716 ' 00:13:50.716 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:50.716 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:50.716 --rc genhtml_branch_coverage=1 00:13:50.716 --rc genhtml_function_coverage=1 00:13:50.716 --rc genhtml_legend=1 00:13:50.716 --rc geninfo_all_blocks=1 00:13:50.716 --rc geninfo_unexecuted_blocks=1 00:13:50.716 00:13:50.716 ' 00:13:50.716 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:50.716 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@7 -- # uname -s 00:13:50.716 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:50.716 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:50.716 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:50.716 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:50.716 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:50.716 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:50.716 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:50.716 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:50.716 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:50.716 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:50.716 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a 00:13:50.716 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=806d442c-08ba-4719-b76d-33169283459a 00:13:50.716 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:50.716 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:50.716 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:50.716 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:50.716 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:50.716 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:13:50.716 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:50.716 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:50.716 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:50.716 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:50.716 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:50.717 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:50.717 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:13:50.717 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:50.717 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:13:50.717 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:50.717 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:50.717 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:50.717 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:50.717 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:50.717 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:50.717 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:50.717 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:50.717 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:50.717 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:50.717 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:13:50.717 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" 
"ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:13:50.717 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:13:50.717 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a 00:13:50.717 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:13:50.717 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:13:50.717 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:13:50.717 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:13:50.717 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:50.717 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:50.717 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:50.717 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:50.717 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:50.717 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:50.717 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:50.717 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:50.717 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:13:50.717 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:13:50.717 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:13:50.717 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:13:50.717 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:13:50.717 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@460 -- # nvmf_veth_init 00:13:50.717 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:50.717 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:13:50.717 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:13:50.717 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:13:50.717 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:50.717 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:13:50.717 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:50.717 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:13:50.717 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:50.717 
11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:13:50.717 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:50.717 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:50.717 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:50.717 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:50.717 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:50.717 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:50.717 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:13:50.717 Cannot find device "nvmf_init_br" 00:13:50.717 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@162 -- # true 00:13:50.717 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:13:50.717 Cannot find device "nvmf_init_br2" 00:13:50.717 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@163 -- # true 00:13:50.717 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:13:50.717 Cannot find device "nvmf_tgt_br" 00:13:50.717 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@164 -- # true 00:13:50.717 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:13:50.717 Cannot find device "nvmf_tgt_br2" 00:13:50.717 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@165 -- # true 00:13:50.717 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:13:50.717 Cannot find device "nvmf_init_br" 00:13:50.717 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@166 -- # true 00:13:50.717 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:13:50.717 Cannot find device "nvmf_init_br2" 00:13:50.717 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@167 -- # true 00:13:50.717 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:13:50.717 Cannot find device "nvmf_tgt_br" 00:13:50.717 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@168 -- # true 00:13:50.717 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:13:50.717 Cannot find device "nvmf_tgt_br2" 00:13:50.717 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@169 -- # true 00:13:50.717 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:13:50.717 Cannot find device "nvmf_br" 00:13:50.717 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@170 -- # true 00:13:50.717 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:13:50.717 Cannot find device "nvmf_init_if" 00:13:50.717 11:26:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@171 -- # true 00:13:50.717 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:13:50.717 Cannot find device "nvmf_init_if2" 00:13:50.717 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@172 -- # true 00:13:50.717 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:50.717 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:50.717 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@173 -- # true 00:13:50.717 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:50.717 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:50.717 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@174 -- # true 00:13:50.717 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:13:50.717 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:50.717 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:13:50.718 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:50.718 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:50.718 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:50.718 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:50.990 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:50.990 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:13:50.990 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:13:50.990 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:13:50.990 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:13:50.990 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:13:50.990 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:13:50.990 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:13:50.990 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:13:50.990 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:13:50.990 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:50.990 11:26:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:50.990 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:50.990 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:13:50.990 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:13:50.990 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:13:50.990 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:13:50.990 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:50.990 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:50.990 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:50.990 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:13:50.990 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:13:50.990 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:13:50.990 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:50.990 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:13:50.990 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:13:50.990 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:50.990 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.104 ms 00:13:50.990 00:13:50.990 --- 10.0.0.3 ping statistics --- 00:13:50.990 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:50.990 rtt min/avg/max/mdev = 0.104/0.104/0.104/0.000 ms 00:13:50.991 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:13:50.991 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:13:50.991 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.062 ms 00:13:50.991 00:13:50.991 --- 10.0.0.4 ping statistics --- 00:13:50.991 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:50.991 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:13:50.991 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:50.991 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:50.991 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:13:50.991 00:13:50.991 --- 10.0.0.1 ping statistics --- 00:13:50.991 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:50.991 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:13:50.991 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:13:50.991 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:50.991 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.058 ms 00:13:50.991 00:13:50.991 --- 10.0.0.2 ping statistics --- 00:13:50.991 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:50.991 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:13:50.991 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:50.991 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@461 -- # return 0 00:13:50.991 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:50.991 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:50.991 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:50.991 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:50.991 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:50.991 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:50.991 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:50.991 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:13:50.991 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:50.991 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:50.991 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:50.991 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=76158 00:13:50.991 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 76158 00:13:50.991 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:13:50.991 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 76158 ']' 00:13:50.991 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:50.991 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:50.991 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
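At this point the trace has finished building the virtual test network before starting nvmf_tgt: two initiator veth pairs kept in the root namespace, two target veth pairs moved into the nvmf_tgt_ns_spdk namespace, all bridge-side ends enslaved to nvmf_br, iptables ACCEPT rules for TCP port 4420, and ping checks across 10.0.0.1-10.0.0.4. The following is a condensed, hand-written sketch of that topology for readers following along; it is not the nvmf_veth_init code from test/nvmf/common.sh, but the interface names and addresses are taken directly from the trace above.

    #!/usr/bin/env bash
    # Sketch: rebuild the bridge + namespace topology the trace above creates.
    set -e
    ip netns add nvmf_tgt_ns_spdk

    # veth pairs: "*_if" ends carry traffic, "*_br"/"*_br2" ends get bridged
    ip link add nvmf_init_if  type veth peer name nvmf_init_br
    ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
    ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2

    # target-side interfaces live inside the nvmf_tgt_ns_spdk namespace
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

    # addresses as used in the trace: initiators .1/.2, targets .3/.4
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip addr add 10.0.0.2/24 dev nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2

    # bring everything up and attach the bridge-side ends to nvmf_br
    ip link add nvmf_br type bridge
    for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 \
               nvmf_tgt_br nvmf_tgt_br2 nvmf_br; do
        ip link set "$dev" up
    done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" master nvmf_br
    done

    # allow NVMe/TCP (port 4420) in, and let the bridge forward to itself
    iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
    iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

    # sanity check, as in the trace: target addresses answer from the root namespace
    ping -c 1 10.0.0.3
    ping -c 1 10.0.0.4

With this layout the nvmf_tgt process is launched inside the namespace (ip netns exec nvmf_tgt_ns_spdk ... nvmf_tgt) and listens on 10.0.0.3/10.0.0.4, while the host/initiator side connects from 10.0.0.1/10.0.0.2 across the bridge, which is exactly what the ping statistics above verify.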
00:13:50.991 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:50.991 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:51.588 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:51.588 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:13:51.588 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:51.588 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:51.588 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:51.588 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:51.588 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=76184 00:13:51.588 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:13:51.588 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:13:51.588 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:13:51.588 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:13:51.588 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:13:51.588 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:13:51.588 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=null 00:13:51.588 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:13:51.588 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:13:51.588 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=a58e6026f5a2c83b67ee52a8f54c4f2003d5df8fedfedef4 00:13:51.588 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:13:51.588 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.jcw 00:13:51.588 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key a58e6026f5a2c83b67ee52a8f54c4f2003d5df8fedfedef4 0 00:13:51.588 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 a58e6026f5a2c83b67ee52a8f54c4f2003d5df8fedfedef4 0 00:13:51.588 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:13:51.588 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:13:51.588 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=a58e6026f5a2c83b67ee52a8f54c4f2003d5df8fedfedef4 00:13:51.588 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=0 00:13:51.588 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:13:51.588 11:26:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.jcw 00:13:51.588 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.jcw 00:13:51.588 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.jcw 00:13:51.588 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:13:51.588 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:13:51.588 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:13:51.588 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:13:51.588 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:13:51.588 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:13:51.589 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:13:51.589 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=9805bd21515163164de9afb7a2c2227dac38210bf1fb95addd91463e65e2c477 00:13:51.589 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:13:51.589 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.2fz 00:13:51.589 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 9805bd21515163164de9afb7a2c2227dac38210bf1fb95addd91463e65e2c477 3 00:13:51.589 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 9805bd21515163164de9afb7a2c2227dac38210bf1fb95addd91463e65e2c477 3 00:13:51.589 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:13:51.589 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:13:51.589 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=9805bd21515163164de9afb7a2c2227dac38210bf1fb95addd91463e65e2c477 00:13:51.589 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:13:51.589 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:13:51.589 11:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.2fz 00:13:51.589 11:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.2fz 00:13:51.589 11:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.2fz 00:13:51.589 11:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:13:51.589 11:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:13:51.589 11:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:13:51.589 11:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:13:51.589 11:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:13:51.589 11:26:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:13:51.589 11:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:13:51.589 11:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=97e9f2913a6198b219fc658f966bbfe8 00:13:51.589 11:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:13:51.589 11:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.zsS 00:13:51.589 11:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 97e9f2913a6198b219fc658f966bbfe8 1 00:13:51.589 11:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 97e9f2913a6198b219fc658f966bbfe8 1 00:13:51.589 11:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:13:51.589 11:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:13:51.589 11:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=97e9f2913a6198b219fc658f966bbfe8 00:13:51.589 11:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:13:51.589 11:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:13:51.589 11:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.zsS 00:13:51.589 11:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.zsS 00:13:51.589 11:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.zsS 00:13:51.589 11:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:13:51.589 11:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:13:51.589 11:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:13:51.589 11:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:13:51.589 11:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:13:51.589 11:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:13:51.589 11:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:13:51.589 11:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=e8e7c06c157d28ea56b56164fb018ec1df00557a7a140454 00:13:51.589 11:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:13:51.589 11:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.Ggp 00:13:51.589 11:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key e8e7c06c157d28ea56b56164fb018ec1df00557a7a140454 2 00:13:51.589 11:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 e8e7c06c157d28ea56b56164fb018ec1df00557a7a140454 2 00:13:51.589 11:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:13:51.589 11:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@732 -- # prefix=DHHC-1 00:13:51.589 11:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=e8e7c06c157d28ea56b56164fb018ec1df00557a7a140454 00:13:51.589 11:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:13:51.589 11:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:13:51.589 11:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.Ggp 00:13:51.589 11:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.Ggp 00:13:51.589 11:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.Ggp 00:13:51.848 11:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:13:51.848 11:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:13:51.848 11:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:13:51.848 11:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:13:51.848 11:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:13:51.848 11:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:13:51.848 11:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:13:51.848 11:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=88d54f2a5fcdff4d11ce226e66ed28300709be12a9ac2ad3 00:13:51.848 11:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:13:51.848 11:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.Zf4 00:13:51.848 11:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 88d54f2a5fcdff4d11ce226e66ed28300709be12a9ac2ad3 2 00:13:51.848 11:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 88d54f2a5fcdff4d11ce226e66ed28300709be12a9ac2ad3 2 00:13:51.848 11:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:13:51.848 11:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:13:51.848 11:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=88d54f2a5fcdff4d11ce226e66ed28300709be12a9ac2ad3 00:13:51.848 11:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:13:51.848 11:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:13:51.848 11:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.Zf4 00:13:51.848 11:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.Zf4 00:13:51.848 11:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # keys[2]=/tmp/spdk.key-sha384.Zf4 00:13:51.848 11:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:13:51.848 11:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:13:51.848 11:26:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:13:51.848 11:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:13:51.848 11:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:13:51.848 11:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:13:51.848 11:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:13:51.848 11:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=fa1e4a0f378d1d690d2223179abf8d19 00:13:51.848 11:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:13:51.848 11:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.Vdl 00:13:51.848 11:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key fa1e4a0f378d1d690d2223179abf8d19 1 00:13:51.848 11:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 fa1e4a0f378d1d690d2223179abf8d19 1 00:13:51.848 11:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:13:51.848 11:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:13:51.848 11:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=fa1e4a0f378d1d690d2223179abf8d19 00:13:51.848 11:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:13:51.848 11:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:13:51.848 11:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.Vdl 00:13:51.849 11:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.Vdl 00:13:51.849 11:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.Vdl 00:13:51.849 11:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:13:51.849 11:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:13:51.849 11:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:13:51.849 11:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:13:51.849 11:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:13:51.849 11:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:13:51.849 11:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:13:51.849 11:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=851a864c00a7ec482ebfd35d4908afeeb77092e3ae9352dc36c1de140c6c580f 00:13:51.849 11:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:13:51.849 11:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.SGZ 00:13:51.849 11:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 
851a864c00a7ec482ebfd35d4908afeeb77092e3ae9352dc36c1de140c6c580f 3 00:13:51.849 11:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 851a864c00a7ec482ebfd35d4908afeeb77092e3ae9352dc36c1de140c6c580f 3 00:13:51.849 11:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:13:51.849 11:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:13:51.849 11:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=851a864c00a7ec482ebfd35d4908afeeb77092e3ae9352dc36c1de140c6c580f 00:13:51.849 11:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:13:51.849 11:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:13:51.849 11:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.SGZ 00:13:51.849 11:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.SGZ 00:13:51.849 11:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.SGZ 00:13:51.849 11:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:13:51.849 11:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 76158 00:13:51.849 11:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 76158 ']' 00:13:51.849 11:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:51.849 11:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:51.849 11:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:51.849 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:51.849 11:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:51.849 11:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:52.106 11:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:52.106 11:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:13:52.107 11:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 76184 /var/tmp/host.sock 00:13:52.107 11:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 76184 ']' 00:13:52.107 11:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:13:52.107 11:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:52.107 11:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:13:52.107 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
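The DHCHAP key material for this test has now been generated: gen_dhchap_key pulls random hex from /dev/urandom with xxd, format_dhchap_key wraps it into a DHHC-1 secret via an inline python helper, the result is written to a /tmp/spdk.key-<digest>.XXX file and chmod 0600'd, and the file paths are stored in keys[]/ckeys[]. The sketch below is a hand-written approximation of those helpers, not the code from nvmf/common.sh; the DHHC-1 blob layout (base64 of the ASCII hex key followed by its CRC-32, assumed little-endian) is inferred from the DHHC-1 secrets that appear later in this log, so treat that encoding detail as an assumption.

    #!/usr/bin/env bash
    # Approximation of gen_dhchap_key/format_dhchap_key as exercised above.
    gen_dhchap_key() {
        local digest=$1 len=$2          # e.g. "null" 48, "sha512" 64
        local key file
        # random hex string of $len characters, as in the trace
        key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)
        file=$(mktemp -t "spdk.key-${digest}.XXX")
        # digest index used in the DHHC-1 prefix: null=0 sha256=1 sha384=2 sha512=3
        local -A digests=([null]=0 [sha256]=1 [sha384]=2 [sha512]=3)
        # Assumption: blob = base64(ASCII key || CRC-32 of the key, little-endian),
        # which matches the DHHC-1 strings visible further down in this log.
        python3 - "$key" "${digests[$digest]}" > "$file" <<'PY'
    import base64, sys, zlib
    key, idx = sys.argv[1].encode(), int(sys.argv[2])
    blob = base64.b64encode(key + zlib.crc32(key).to_bytes(4, "little")).decode()
    print(f"DHHC-1:{idx:02}:{blob}:")
    PY
        chmod 0600 "$file"
        echo "$file"
    }

    # usage mirroring target/auth.sh:
    #   keys[0]=$(gen_dhchap_key null 48)    ckeys[0]=$(gen_dhchap_key sha512 64)
    #   keys[1]=$(gen_dhchap_key sha256 32)  ckeys[1]=$(gen_dhchap_key sha384 48)

These key files are what the test subsequently registers on both sides, with rpc.py keyring_file_add_key against the target's /var/tmp/spdk.sock and the host's /var/tmp/host.sock, before wiring them into nvmf_subsystem_add_host and bdev_nvme_attach_controller with --dhchap-key/--dhchap-ctrlr-key.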
00:13:52.107 11:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:52.107 11:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:52.674 11:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:52.674 11:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:13:52.674 11:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:13:52.674 11:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.674 11:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:52.674 11:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.674 11:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:13:52.674 11:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.jcw 00:13:52.674 11:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.674 11:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:52.674 11:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.674 11:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.jcw 00:13:52.674 11:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.jcw 00:13:52.931 11:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha512.2fz ]] 00:13:52.931 11:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.2fz 00:13:52.931 11:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.931 11:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:52.931 11:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.931 11:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.2fz 00:13:52.931 11:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.2fz 00:13:53.188 11:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:13:53.188 11:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.zsS 00:13:53.188 11:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.188 11:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:53.188 11:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.188 11:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.zsS 00:13:53.188 11:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.zsS 00:13:53.445 11:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha384.Ggp ]] 00:13:53.445 11:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Ggp 00:13:53.446 11:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.446 11:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:53.446 11:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.446 11:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Ggp 00:13:53.446 11:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Ggp 00:13:54.021 11:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:13:54.021 11:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.Zf4 00:13:54.021 11:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.021 11:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:54.021 11:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.021 11:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.Zf4 00:13:54.021 11:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.Zf4 00:13:54.278 11:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.Vdl ]] 00:13:54.279 11:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Vdl 00:13:54.279 11:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.279 11:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:54.279 11:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.279 11:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Vdl 00:13:54.279 11:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Vdl 00:13:54.537 11:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:13:54.537 11:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.SGZ 00:13:54.537 11:26:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.537 11:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:54.537 11:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.537 11:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.SGZ 00:13:54.537 11:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.SGZ 00:13:54.795 11:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:13:54.795 11:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:13:54.795 11:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:13:54.795 11:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:54.795 11:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:13:54.795 11:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:13:55.053 11:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:13:55.053 11:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:55.053 11:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:55.053 11:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:13:55.053 11:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:55.053 11:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:55.053 11:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:55.053 11:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.053 11:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:55.311 11:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.311 11:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:55.311 11:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:55.311 11:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 
10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:55.570 00:13:55.570 11:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:55.570 11:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:55.570 11:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:55.828 11:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:55.828 11:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:55.828 11:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.828 11:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:55.828 11:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.828 11:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:55.828 { 00:13:55.828 "auth": { 00:13:55.828 "dhgroup": "null", 00:13:55.828 "digest": "sha256", 00:13:55.828 "state": "completed" 00:13:55.828 }, 00:13:55.828 "cntlid": 1, 00:13:55.828 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a", 00:13:55.828 "listen_address": { 00:13:55.828 "adrfam": "IPv4", 00:13:55.828 "traddr": "10.0.0.3", 00:13:55.828 "trsvcid": "4420", 00:13:55.828 "trtype": "TCP" 00:13:55.828 }, 00:13:55.828 "peer_address": { 00:13:55.828 "adrfam": "IPv4", 00:13:55.828 "traddr": "10.0.0.1", 00:13:55.828 "trsvcid": "58014", 00:13:55.828 "trtype": "TCP" 00:13:55.828 }, 00:13:55.828 "qid": 0, 00:13:55.828 "state": "enabled", 00:13:55.828 "thread": "nvmf_tgt_poll_group_000" 00:13:55.828 } 00:13:55.828 ]' 00:13:56.087 11:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:56.087 11:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:56.087 11:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:56.087 11:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:13:56.087 11:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:56.087 11:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:56.087 11:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:56.087 11:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:56.345 11:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTU4ZTYwMjZmNWEyYzgzYjY3ZWU1MmE4ZjU0YzRmMjAwM2Q1ZGY4ZmVkZmVkZWY0w3W3ew==: --dhchap-ctrl-secret DHHC-1:03:OTgwNWJkMjE1MTUxNjMxNjRkZTlhZmI3YTJjMjIyN2RhYzM4MjEwYmYxZmI5NWFkZGQ5MTQ2M2U2NWUyYzQ3N6HE8a4=: 00:13:56.345 11:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme 
connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a --hostid 806d442c-08ba-4719-b76d-33169283459a -l 0 --dhchap-secret DHHC-1:00:YTU4ZTYwMjZmNWEyYzgzYjY3ZWU1MmE4ZjU0YzRmMjAwM2Q1ZGY4ZmVkZmVkZWY0w3W3ew==: --dhchap-ctrl-secret DHHC-1:03:OTgwNWJkMjE1MTUxNjMxNjRkZTlhZmI3YTJjMjIyN2RhYzM4MjEwYmYxZmI5NWFkZGQ5MTQ2M2U2NWUyYzQ3N6HE8a4=: 00:14:01.611 11:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:01.611 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:01.611 11:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a 00:14:01.611 11:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.611 11:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:01.611 11:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.611 11:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:01.611 11:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:01.611 11:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:01.611 11:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:14:01.611 11:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:01.611 11:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:01.611 11:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:14:01.611 11:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:01.611 11:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:01.611 11:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:01.611 11:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.611 11:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:01.611 11:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.611 11:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:01.611 11:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:01.611 11:26:46 
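A condensed sketch of one connect_authenticate pass as it appears in the trace immediately above (the key1 pass): the host NQN is added to the subsystem's allow-list together with the keyring names to use for DH-HMAC-CHAP, then the host-side application attaches a bdev controller with the same key names. All identifiers are copied from the commands in this log.

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a
  # target: allow the host and bind key1/ckey1 to it
  $RPC nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$HOSTNQN" \
      --dhchap-key key1 --dhchap-ctrlr-key ckey1
  # host: attach and authenticate with the matching keys
  $RPC -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
      -a 10.0.0.3 -s 4420 -q "$HOSTNQN" -n nqn.2024-03.io.spdk:cnode0 \
      -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
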
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:01.611 00:14:01.611 11:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:01.611 11:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:01.611 11:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:01.869 11:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:01.869 11:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:01.869 11:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.869 11:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:01.869 11:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.869 11:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:01.869 { 00:14:01.869 "auth": { 00:14:01.869 "dhgroup": "null", 00:14:01.869 "digest": "sha256", 00:14:01.869 "state": "completed" 00:14:01.869 }, 00:14:01.869 "cntlid": 3, 00:14:01.869 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a", 00:14:01.869 "listen_address": { 00:14:01.869 "adrfam": "IPv4", 00:14:01.869 "traddr": "10.0.0.3", 00:14:01.869 "trsvcid": "4420", 00:14:01.869 "trtype": "TCP" 00:14:01.869 }, 00:14:01.869 "peer_address": { 00:14:01.869 "adrfam": "IPv4", 00:14:01.869 "traddr": "10.0.0.1", 00:14:01.869 "trsvcid": "58032", 00:14:01.869 "trtype": "TCP" 00:14:01.869 }, 00:14:01.869 "qid": 0, 00:14:01.869 "state": "enabled", 00:14:01.869 "thread": "nvmf_tgt_poll_group_000" 00:14:01.869 } 00:14:01.869 ]' 00:14:01.869 11:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:01.869 11:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:01.869 11:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:02.126 11:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:14:02.127 11:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:02.127 11:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:02.127 11:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:02.127 11:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:02.385 11:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTdlOWYyOTEzYTYxOThiMjE5ZmM2NThmOTY2YmJmZTg53tHC: --dhchap-ctrl-secret 
DHHC-1:02:ZThlN2MwNmMxNTdkMjhlYTU2YjU2MTY0ZmIwMThlYzFkZjAwNTU3YTdhMTQwNDU0RjLgqQ==: 00:14:02.385 11:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a --hostid 806d442c-08ba-4719-b76d-33169283459a -l 0 --dhchap-secret DHHC-1:01:OTdlOWYyOTEzYTYxOThiMjE5ZmM2NThmOTY2YmJmZTg53tHC: --dhchap-ctrl-secret DHHC-1:02:ZThlN2MwNmMxNTdkMjhlYTU2YjU2MTY0ZmIwMThlYzFkZjAwNTU3YTdhMTQwNDU0RjLgqQ==: 00:14:02.993 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:02.994 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:02.994 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a 00:14:02.994 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.994 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:03.251 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.251 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:03.251 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:03.251 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:03.510 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:14:03.510 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:03.510 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:03.510 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:14:03.510 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:03.510 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:03.510 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:03.510 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.510 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:03.510 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.510 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:03.510 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a 
-n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:03.510 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:03.770 00:14:03.770 11:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:03.770 11:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:03.770 11:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:04.034 11:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:04.035 11:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:04.035 11:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.035 11:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:04.035 11:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.035 11:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:04.035 { 00:14:04.035 "auth": { 00:14:04.035 "dhgroup": "null", 00:14:04.035 "digest": "sha256", 00:14:04.035 "state": "completed" 00:14:04.035 }, 00:14:04.035 "cntlid": 5, 00:14:04.035 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a", 00:14:04.035 "listen_address": { 00:14:04.035 "adrfam": "IPv4", 00:14:04.035 "traddr": "10.0.0.3", 00:14:04.035 "trsvcid": "4420", 00:14:04.035 "trtype": "TCP" 00:14:04.035 }, 00:14:04.035 "peer_address": { 00:14:04.035 "adrfam": "IPv4", 00:14:04.035 "traddr": "10.0.0.1", 00:14:04.035 "trsvcid": "58042", 00:14:04.035 "trtype": "TCP" 00:14:04.035 }, 00:14:04.035 "qid": 0, 00:14:04.035 "state": "enabled", 00:14:04.035 "thread": "nvmf_tgt_poll_group_000" 00:14:04.035 } 00:14:04.035 ]' 00:14:04.035 11:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:04.293 11:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:04.293 11:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:04.293 11:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:14:04.293 11:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:04.293 11:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:04.293 11:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:04.293 11:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:04.551 11:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:02:ODhkNTRmMmE1ZmNkZmY0ZDExY2UyMjZlNjZlZDI4MzAwNzA5YmUxMmE5YWMyYWQzAlDRWQ==: --dhchap-ctrl-secret DHHC-1:01:ZmExZTRhMGYzNzhkMWQ2OTBkMjIyMzE3OWFiZjhkMTkzDxnU: 00:14:04.551 11:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a --hostid 806d442c-08ba-4719-b76d-33169283459a -l 0 --dhchap-secret DHHC-1:02:ODhkNTRmMmE1ZmNkZmY0ZDExY2UyMjZlNjZlZDI4MzAwNzA5YmUxMmE5YWMyYWQzAlDRWQ==: --dhchap-ctrl-secret DHHC-1:01:ZmExZTRhMGYzNzhkMWQ2OTBkMjIyMzE3OWFiZjhkMTkzDxnU: 00:14:05.486 11:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:05.486 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:05.486 11:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a 00:14:05.486 11:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.487 11:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:05.487 11:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.487 11:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:05.487 11:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:05.487 11:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:05.487 11:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:14:05.487 11:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:05.487 11:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:05.487 11:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:14:05.487 11:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:05.487 11:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:05.487 11:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a --dhchap-key key3 00:14:05.487 11:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.487 11:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:05.487 11:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.487 11:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:05.487 11:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:05.487 11:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:06.054 00:14:06.054 11:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:06.054 11:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:06.054 11:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:06.313 11:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:06.313 11:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:06.313 11:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.313 11:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:06.313 11:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.313 11:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:06.313 { 00:14:06.313 "auth": { 00:14:06.313 "dhgroup": "null", 00:14:06.313 "digest": "sha256", 00:14:06.313 "state": "completed" 00:14:06.313 }, 00:14:06.313 "cntlid": 7, 00:14:06.313 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a", 00:14:06.313 "listen_address": { 00:14:06.313 "adrfam": "IPv4", 00:14:06.313 "traddr": "10.0.0.3", 00:14:06.313 "trsvcid": "4420", 00:14:06.313 "trtype": "TCP" 00:14:06.313 }, 00:14:06.313 "peer_address": { 00:14:06.313 "adrfam": "IPv4", 00:14:06.313 "traddr": "10.0.0.1", 00:14:06.313 "trsvcid": "43334", 00:14:06.313 "trtype": "TCP" 00:14:06.313 }, 00:14:06.313 "qid": 0, 00:14:06.313 "state": "enabled", 00:14:06.313 "thread": "nvmf_tgt_poll_group_000" 00:14:06.313 } 00:14:06.313 ]' 00:14:06.313 11:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:06.313 11:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:06.313 11:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:06.313 11:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:14:06.313 11:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:06.571 11:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:06.572 11:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:06.572 11:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:06.829 11:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:ODUxYTg2NGMwMGE3ZWM0ODJlYmZkMzVkNDkwOGFmZWViNzcwOTJlM2FlOTM1MmRjMzZjMWRlMTQwYzZjNTgwZnmdTmU=: 00:14:06.829 11:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a --hostid 806d442c-08ba-4719-b76d-33169283459a -l 0 --dhchap-secret DHHC-1:03:ODUxYTg2NGMwMGE3ZWM0ODJlYmZkMzVkNDkwOGFmZWViNzcwOTJlM2FlOTM1MmRjMzZjMWRlMTQwYzZjNTgwZnmdTmU=: 00:14:07.395 11:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:07.395 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:07.395 11:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a 00:14:07.395 11:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.395 11:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:07.395 11:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.395 11:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:07.395 11:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:07.395 11:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:07.395 11:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:07.654 11:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:14:07.654 11:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:07.654 11:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:07.654 11:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:14:07.654 11:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:07.654 11:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:07.654 11:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:07.654 11:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.654 11:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:07.914 11:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.914 11:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:07.914 11:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t 
tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:07.914 11:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:08.172 00:14:08.172 11:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:08.172 11:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:08.172 11:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:08.431 11:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:08.431 11:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:08.431 11:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.431 11:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:08.431 11:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:08.431 11:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:08.431 { 00:14:08.431 "auth": { 00:14:08.431 "dhgroup": "ffdhe2048", 00:14:08.431 "digest": "sha256", 00:14:08.431 "state": "completed" 00:14:08.431 }, 00:14:08.431 "cntlid": 9, 00:14:08.431 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a", 00:14:08.431 "listen_address": { 00:14:08.431 "adrfam": "IPv4", 00:14:08.431 "traddr": "10.0.0.3", 00:14:08.431 "trsvcid": "4420", 00:14:08.431 "trtype": "TCP" 00:14:08.431 }, 00:14:08.431 "peer_address": { 00:14:08.431 "adrfam": "IPv4", 00:14:08.431 "traddr": "10.0.0.1", 00:14:08.431 "trsvcid": "43362", 00:14:08.431 "trtype": "TCP" 00:14:08.431 }, 00:14:08.431 "qid": 0, 00:14:08.431 "state": "enabled", 00:14:08.431 "thread": "nvmf_tgt_poll_group_000" 00:14:08.431 } 00:14:08.431 ]' 00:14:08.431 11:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:08.431 11:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:08.431 11:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:08.689 11:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:08.689 11:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:08.689 11:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:08.689 11:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:08.689 11:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:08.948 
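After each attach the test reads the qpair list back from the target and checks that the negotiated authentication parameters match what it configured; a minimal sketch of that verification, using the same RPC and jq filters that appear in this trace (expected values are the ones for the sha256/ffdhe2048 pass just shown).

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  qpairs=$($RPC nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
  jq -r '.[0].auth.digest'  <<< "$qpairs"   # expect: sha256
  jq -r '.[0].auth.dhgroup' <<< "$qpairs"   # expect: ffdhe2048
  jq -r '.[0].auth.state'   <<< "$qpairs"   # expect: completed
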
11:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTU4ZTYwMjZmNWEyYzgzYjY3ZWU1MmE4ZjU0YzRmMjAwM2Q1ZGY4ZmVkZmVkZWY0w3W3ew==: --dhchap-ctrl-secret DHHC-1:03:OTgwNWJkMjE1MTUxNjMxNjRkZTlhZmI3YTJjMjIyN2RhYzM4MjEwYmYxZmI5NWFkZGQ5MTQ2M2U2NWUyYzQ3N6HE8a4=: 00:14:08.948 11:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a --hostid 806d442c-08ba-4719-b76d-33169283459a -l 0 --dhchap-secret DHHC-1:00:YTU4ZTYwMjZmNWEyYzgzYjY3ZWU1MmE4ZjU0YzRmMjAwM2Q1ZGY4ZmVkZmVkZWY0w3W3ew==: --dhchap-ctrl-secret DHHC-1:03:OTgwNWJkMjE1MTUxNjMxNjRkZTlhZmI3YTJjMjIyN2RhYzM4MjEwYmYxZmI5NWFkZGQ5MTQ2M2U2NWUyYzQ3N6HE8a4=: 00:14:09.514 11:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:09.514 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:09.514 11:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a 00:14:09.514 11:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.514 11:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:09.514 11:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.514 11:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:09.514 11:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:09.514 11:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:09.774 11:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:14:09.774 11:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:09.774 11:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:09.774 11:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:14:09.774 11:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:09.774 11:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:09.774 11:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:09.774 11:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.774 11:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:09.774 11:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.774 11:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:09.774 11:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:09.774 11:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:10.341 00:14:10.341 11:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:10.341 11:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:10.341 11:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:10.600 11:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:10.600 11:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:10.600 11:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.600 11:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:10.600 11:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.600 11:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:10.600 { 00:14:10.600 "auth": { 00:14:10.600 "dhgroup": "ffdhe2048", 00:14:10.600 "digest": "sha256", 00:14:10.600 "state": "completed" 00:14:10.600 }, 00:14:10.600 "cntlid": 11, 00:14:10.600 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a", 00:14:10.600 "listen_address": { 00:14:10.600 "adrfam": "IPv4", 00:14:10.600 "traddr": "10.0.0.3", 00:14:10.600 "trsvcid": "4420", 00:14:10.600 "trtype": "TCP" 00:14:10.600 }, 00:14:10.600 "peer_address": { 00:14:10.600 "adrfam": "IPv4", 00:14:10.600 "traddr": "10.0.0.1", 00:14:10.600 "trsvcid": "43402", 00:14:10.600 "trtype": "TCP" 00:14:10.600 }, 00:14:10.600 "qid": 0, 00:14:10.600 "state": "enabled", 00:14:10.600 "thread": "nvmf_tgt_poll_group_000" 00:14:10.600 } 00:14:10.600 ]' 00:14:10.600 11:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:10.600 11:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:10.600 11:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:10.600 11:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:10.600 11:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:10.859 11:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:10.859 11:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:10.859 
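Once the SPDK-side controller has been verified and detached, the trace replays the same credentials through the kernel initiator; a sketch of that step, with the transport options and DHHC-1 secrets copied verbatim from the nvme_connect lines that follow just below (line breaks added only for readability).

  nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -l 0 \
      -q nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a \
      --hostid 806d442c-08ba-4719-b76d-33169283459a \
      --dhchap-secret DHHC-1:01:OTdlOWYyOTEzYTYxOThiMjE5ZmM2NThmOTY2YmJmZTg53tHC: \
      --dhchap-ctrl-secret DHHC-1:02:ZThlN2MwNmMxNTdkMjhlYTU2YjU2MTY0ZmIwMThlYzFkZjAwNTU3YTdhMTQwNDU0RjLgqQ==:
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0
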
11:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:11.117 11:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTdlOWYyOTEzYTYxOThiMjE5ZmM2NThmOTY2YmJmZTg53tHC: --dhchap-ctrl-secret DHHC-1:02:ZThlN2MwNmMxNTdkMjhlYTU2YjU2MTY0ZmIwMThlYzFkZjAwNTU3YTdhMTQwNDU0RjLgqQ==: 00:14:11.118 11:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a --hostid 806d442c-08ba-4719-b76d-33169283459a -l 0 --dhchap-secret DHHC-1:01:OTdlOWYyOTEzYTYxOThiMjE5ZmM2NThmOTY2YmJmZTg53tHC: --dhchap-ctrl-secret DHHC-1:02:ZThlN2MwNmMxNTdkMjhlYTU2YjU2MTY0ZmIwMThlYzFkZjAwNTU3YTdhMTQwNDU0RjLgqQ==: 00:14:11.683 11:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:11.683 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:11.683 11:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a 00:14:11.683 11:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.683 11:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:11.683 11:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.683 11:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:11.683 11:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:11.683 11:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:12.250 11:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:14:12.251 11:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:12.251 11:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:12.251 11:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:14:12.251 11:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:12.251 11:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:12.251 11:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:12.251 11:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.251 11:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:12.251 11:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:14:12.251 11:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:12.251 11:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:12.251 11:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:12.510 00:14:12.510 11:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:12.510 11:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:12.510 11:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:12.769 11:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:12.769 11:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:12.769 11:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.769 11:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:12.769 11:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.769 11:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:12.769 { 00:14:12.769 "auth": { 00:14:12.769 "dhgroup": "ffdhe2048", 00:14:12.769 "digest": "sha256", 00:14:12.769 "state": "completed" 00:14:12.769 }, 00:14:12.769 "cntlid": 13, 00:14:12.769 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a", 00:14:12.769 "listen_address": { 00:14:12.769 "adrfam": "IPv4", 00:14:12.769 "traddr": "10.0.0.3", 00:14:12.769 "trsvcid": "4420", 00:14:12.769 "trtype": "TCP" 00:14:12.769 }, 00:14:12.769 "peer_address": { 00:14:12.769 "adrfam": "IPv4", 00:14:12.769 "traddr": "10.0.0.1", 00:14:12.769 "trsvcid": "43422", 00:14:12.769 "trtype": "TCP" 00:14:12.769 }, 00:14:12.769 "qid": 0, 00:14:12.769 "state": "enabled", 00:14:12.769 "thread": "nvmf_tgt_poll_group_000" 00:14:12.769 } 00:14:12.769 ]' 00:14:12.769 11:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:12.769 11:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:12.769 11:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:13.028 11:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:13.028 11:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:13.028 11:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:13.028 11:26:58 
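The DHHC-1 strings used throughout this log follow the NVMe in-band authentication secret representation "DHHC-1:XX:<base64 payload>:", where the two-digit field appears to encode the hash the secret was transformed with (00 = unhashed, 01 = SHA-256, 02 = SHA-384, 03 = SHA-512); that reading is an inference from the /tmp/spdk.key-null/-sha256/-sha384/-sha512 file names registered at the start of this run, not something the log states. A small parsing sketch:

  # key1's secret, copied from the trace above
  secret='DHHC-1:01:OTdlOWYyOTEzYTYxOThiMjE5ZmM2NThmOTY2YmJmZTg53tHC:'
  IFS=: read -r fmt hash payload _ <<< "$secret"
  printf 'format=%s hash-id=%s payload=%s\n' "$fmt" "$hash" "$payload"
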
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:13.028 11:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:13.287 11:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ODhkNTRmMmE1ZmNkZmY0ZDExY2UyMjZlNjZlZDI4MzAwNzA5YmUxMmE5YWMyYWQzAlDRWQ==: --dhchap-ctrl-secret DHHC-1:01:ZmExZTRhMGYzNzhkMWQ2OTBkMjIyMzE3OWFiZjhkMTkzDxnU: 00:14:13.287 11:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a --hostid 806d442c-08ba-4719-b76d-33169283459a -l 0 --dhchap-secret DHHC-1:02:ODhkNTRmMmE1ZmNkZmY0ZDExY2UyMjZlNjZlZDI4MzAwNzA5YmUxMmE5YWMyYWQzAlDRWQ==: --dhchap-ctrl-secret DHHC-1:01:ZmExZTRhMGYzNzhkMWQ2OTBkMjIyMzE3OWFiZjhkMTkzDxnU: 00:14:13.852 11:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:13.852 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:13.852 11:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a 00:14:13.852 11:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.852 11:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:13.852 11:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.852 11:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:13.852 11:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:13.852 11:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:14.419 11:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:14:14.419 11:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:14.419 11:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:14.419 11:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:14:14.419 11:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:14.419 11:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:14.419 11:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a --dhchap-key key3 00:14:14.419 11:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.419 11:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
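The ckey assignment traced at target/auth.sh@68 relies on bash's ${var:+word} expansion so that a controller key is only passed when one was generated for that index; key3 has no matching ckey (note the earlier [[ -n '' ]] check), so the nvmf_subsystem_add_host call for key3 above carries only --dhchap-key, i.e. that pass authenticates in one direction only. A standalone illustration of the expansion; the array contents here are hypothetical:

  ckeys=(/tmp/ck0.key /tmp/ck1.key /tmp/ck2.key "")   # hypothetical paths; index 3 intentionally empty
  for i in 1 3; do
      extra=(${ckeys[$i]:+--dhchap-ctrlr-key "ckey$i"})
      echo "key$i -> ${#extra[@]} extra argument(s): ${extra[*]}"
  done
  # key1 -> 2 extra argument(s): --dhchap-ctrlr-key ckey1
  # key3 -> 0 extra argument(s):
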
00:14:14.419 11:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.419 11:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:14.419 11:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:14.419 11:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:14.714 00:14:14.714 11:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:14.714 11:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:14.714 11:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:14.972 11:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:14.972 11:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:14.972 11:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.972 11:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:14.972 11:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.972 11:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:14.972 { 00:14:14.972 "auth": { 00:14:14.972 "dhgroup": "ffdhe2048", 00:14:14.972 "digest": "sha256", 00:14:14.972 "state": "completed" 00:14:14.972 }, 00:14:14.972 "cntlid": 15, 00:14:14.972 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a", 00:14:14.972 "listen_address": { 00:14:14.972 "adrfam": "IPv4", 00:14:14.972 "traddr": "10.0.0.3", 00:14:14.972 "trsvcid": "4420", 00:14:14.972 "trtype": "TCP" 00:14:14.972 }, 00:14:14.972 "peer_address": { 00:14:14.972 "adrfam": "IPv4", 00:14:14.972 "traddr": "10.0.0.1", 00:14:14.972 "trsvcid": "43440", 00:14:14.972 "trtype": "TCP" 00:14:14.972 }, 00:14:14.972 "qid": 0, 00:14:14.972 "state": "enabled", 00:14:14.972 "thread": "nvmf_tgt_poll_group_000" 00:14:14.972 } 00:14:14.972 ]' 00:14:14.972 11:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:14.972 11:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:14.972 11:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:14.972 11:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:14.972 11:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:14.972 11:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:14.972 
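Between the SPDK-side check and the next combination, the trace always runs the same teardown: the host-side bdev controller is detached, the kernel-initiator connect/disconnect check runs, and finally the host entry is removed from the subsystem so the next digest/DH-group/key pass starts from a clean allow-list. A compressed sketch of those teardown commands, with identifiers copied from the surrounding trace:

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a
  $RPC -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
  # ... kernel-initiator nvme connect check runs here (see the nvme_connect lines) ...
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0
  $RPC nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$HOSTNQN"
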
11:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:14.972 11:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:15.230 11:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ODUxYTg2NGMwMGE3ZWM0ODJlYmZkMzVkNDkwOGFmZWViNzcwOTJlM2FlOTM1MmRjMzZjMWRlMTQwYzZjNTgwZnmdTmU=: 00:14:15.230 11:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a --hostid 806d442c-08ba-4719-b76d-33169283459a -l 0 --dhchap-secret DHHC-1:03:ODUxYTg2NGMwMGE3ZWM0ODJlYmZkMzVkNDkwOGFmZWViNzcwOTJlM2FlOTM1MmRjMzZjMWRlMTQwYzZjNTgwZnmdTmU=: 00:14:16.165 11:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:16.165 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:16.165 11:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a 00:14:16.165 11:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.165 11:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:16.165 11:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.165 11:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:16.165 11:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:16.165 11:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:16.165 11:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:16.424 11:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:14:16.424 11:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:16.424 11:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:16.424 11:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:14:16.424 11:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:16.424 11:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:16.424 11:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:16.424 11:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.424 11:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:14:16.424 11:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.424 11:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:16.424 11:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:16.424 11:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:16.992 00:14:16.992 11:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:16.992 11:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:16.992 11:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:17.251 11:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:17.251 11:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:17.251 11:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.251 11:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:17.251 11:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.251 11:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:17.251 { 00:14:17.251 "auth": { 00:14:17.251 "dhgroup": "ffdhe3072", 00:14:17.251 "digest": "sha256", 00:14:17.251 "state": "completed" 00:14:17.251 }, 00:14:17.251 "cntlid": 17, 00:14:17.251 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a", 00:14:17.251 "listen_address": { 00:14:17.251 "adrfam": "IPv4", 00:14:17.251 "traddr": "10.0.0.3", 00:14:17.251 "trsvcid": "4420", 00:14:17.251 "trtype": "TCP" 00:14:17.251 }, 00:14:17.251 "peer_address": { 00:14:17.251 "adrfam": "IPv4", 00:14:17.251 "traddr": "10.0.0.1", 00:14:17.251 "trsvcid": "52012", 00:14:17.251 "trtype": "TCP" 00:14:17.251 }, 00:14:17.251 "qid": 0, 00:14:17.251 "state": "enabled", 00:14:17.251 "thread": "nvmf_tgt_poll_group_000" 00:14:17.251 } 00:14:17.251 ]' 00:14:17.251 11:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:17.251 11:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:17.251 11:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:17.251 11:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:17.251 11:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:17.509 11:27:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:17.509 11:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:17.509 11:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:17.767 11:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTU4ZTYwMjZmNWEyYzgzYjY3ZWU1MmE4ZjU0YzRmMjAwM2Q1ZGY4ZmVkZmVkZWY0w3W3ew==: --dhchap-ctrl-secret DHHC-1:03:OTgwNWJkMjE1MTUxNjMxNjRkZTlhZmI3YTJjMjIyN2RhYzM4MjEwYmYxZmI5NWFkZGQ5MTQ2M2U2NWUyYzQ3N6HE8a4=: 00:14:17.767 11:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a --hostid 806d442c-08ba-4719-b76d-33169283459a -l 0 --dhchap-secret DHHC-1:00:YTU4ZTYwMjZmNWEyYzgzYjY3ZWU1MmE4ZjU0YzRmMjAwM2Q1ZGY4ZmVkZmVkZWY0w3W3ew==: --dhchap-ctrl-secret DHHC-1:03:OTgwNWJkMjE1MTUxNjMxNjRkZTlhZmI3YTJjMjIyN2RhYzM4MjEwYmYxZmI5NWFkZGQ5MTQ2M2U2NWUyYzQ3N6HE8a4=: 00:14:18.336 11:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:18.336 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:18.336 11:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a 00:14:18.336 11:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.336 11:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:18.336 11:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.336 11:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:18.336 11:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:18.336 11:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:18.904 11:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:14:18.904 11:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:18.904 11:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:18.904 11:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:14:18.904 11:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:18.904 11:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:18.904 11:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a --dhchap-key key1 --dhchap-ctrlr-key 
ckey1 00:14:18.904 11:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.904 11:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:18.904 11:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.904 11:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:18.904 11:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:18.905 11:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:19.163 00:14:19.163 11:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:19.163 11:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:19.163 11:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:19.438 11:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:19.438 11:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:19.438 11:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.438 11:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:19.438 11:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.438 11:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:19.438 { 00:14:19.438 "auth": { 00:14:19.438 "dhgroup": "ffdhe3072", 00:14:19.438 "digest": "sha256", 00:14:19.438 "state": "completed" 00:14:19.438 }, 00:14:19.438 "cntlid": 19, 00:14:19.438 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a", 00:14:19.438 "listen_address": { 00:14:19.438 "adrfam": "IPv4", 00:14:19.438 "traddr": "10.0.0.3", 00:14:19.438 "trsvcid": "4420", 00:14:19.438 "trtype": "TCP" 00:14:19.438 }, 00:14:19.438 "peer_address": { 00:14:19.438 "adrfam": "IPv4", 00:14:19.438 "traddr": "10.0.0.1", 00:14:19.438 "trsvcid": "52046", 00:14:19.438 "trtype": "TCP" 00:14:19.438 }, 00:14:19.438 "qid": 0, 00:14:19.438 "state": "enabled", 00:14:19.438 "thread": "nvmf_tgt_poll_group_000" 00:14:19.438 } 00:14:19.438 ]' 00:14:19.438 11:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:19.438 11:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:19.438 11:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:19.702 11:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:19.702 11:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:19.702 11:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:19.702 11:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:19.702 11:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:19.959 11:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTdlOWYyOTEzYTYxOThiMjE5ZmM2NThmOTY2YmJmZTg53tHC: --dhchap-ctrl-secret DHHC-1:02:ZThlN2MwNmMxNTdkMjhlYTU2YjU2MTY0ZmIwMThlYzFkZjAwNTU3YTdhMTQwNDU0RjLgqQ==: 00:14:19.959 11:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a --hostid 806d442c-08ba-4719-b76d-33169283459a -l 0 --dhchap-secret DHHC-1:01:OTdlOWYyOTEzYTYxOThiMjE5ZmM2NThmOTY2YmJmZTg53tHC: --dhchap-ctrl-secret DHHC-1:02:ZThlN2MwNmMxNTdkMjhlYTU2YjU2MTY0ZmIwMThlYzFkZjAwNTU3YTdhMTQwNDU0RjLgqQ==: 00:14:20.893 11:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:20.893 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:20.893 11:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a 00:14:20.893 11:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.893 11:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:20.893 11:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.893 11:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:20.893 11:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:20.893 11:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:20.893 11:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:14:20.893 11:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:20.893 11:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:20.893 11:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:14:20.893 11:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:20.893 11:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:20.893 11:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:20.893 11:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.893 11:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:20.893 11:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.893 11:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:20.893 11:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:20.893 11:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:21.460 00:14:21.460 11:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:21.460 11:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:21.460 11:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:22.038 11:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:22.038 11:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:22.038 11:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.038 11:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:22.038 11:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.038 11:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:22.038 { 00:14:22.038 "auth": { 00:14:22.038 "dhgroup": "ffdhe3072", 00:14:22.038 "digest": "sha256", 00:14:22.038 "state": "completed" 00:14:22.038 }, 00:14:22.038 "cntlid": 21, 00:14:22.038 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a", 00:14:22.038 "listen_address": { 00:14:22.038 "adrfam": "IPv4", 00:14:22.038 "traddr": "10.0.0.3", 00:14:22.038 "trsvcid": "4420", 00:14:22.038 "trtype": "TCP" 00:14:22.038 }, 00:14:22.038 "peer_address": { 00:14:22.038 "adrfam": "IPv4", 00:14:22.038 "traddr": "10.0.0.1", 00:14:22.038 "trsvcid": "52068", 00:14:22.038 "trtype": "TCP" 00:14:22.038 }, 00:14:22.038 "qid": 0, 00:14:22.038 "state": "enabled", 00:14:22.038 "thread": "nvmf_tgt_poll_group_000" 00:14:22.038 } 00:14:22.038 ]' 00:14:22.038 11:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:22.038 11:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:22.038 11:27:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:22.038 11:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:22.038 11:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:22.038 11:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:22.038 11:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:22.038 11:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:22.298 11:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ODhkNTRmMmE1ZmNkZmY0ZDExY2UyMjZlNjZlZDI4MzAwNzA5YmUxMmE5YWMyYWQzAlDRWQ==: --dhchap-ctrl-secret DHHC-1:01:ZmExZTRhMGYzNzhkMWQ2OTBkMjIyMzE3OWFiZjhkMTkzDxnU: 00:14:22.299 11:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a --hostid 806d442c-08ba-4719-b76d-33169283459a -l 0 --dhchap-secret DHHC-1:02:ODhkNTRmMmE1ZmNkZmY0ZDExY2UyMjZlNjZlZDI4MzAwNzA5YmUxMmE5YWMyYWQzAlDRWQ==: --dhchap-ctrl-secret DHHC-1:01:ZmExZTRhMGYzNzhkMWQ2OTBkMjIyMzE3OWFiZjhkMTkzDxnU: 00:14:23.233 11:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:23.233 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:23.233 11:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a 00:14:23.233 11:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.233 11:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:23.233 11:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.233 11:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:23.233 11:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:23.233 11:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:23.492 11:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:14:23.492 11:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:23.492 11:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:23.492 11:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:14:23.492 11:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:23.492 11:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:23.492 11:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a --dhchap-key key3 00:14:23.492 11:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.492 11:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:23.492 11:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.492 11:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:23.492 11:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:23.492 11:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:24.060 00:14:24.060 11:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:24.060 11:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:24.060 11:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:24.318 11:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:24.318 11:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:24.318 11:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.318 11:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:24.318 11:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.318 11:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:24.318 { 00:14:24.318 "auth": { 00:14:24.318 "dhgroup": "ffdhe3072", 00:14:24.318 "digest": "sha256", 00:14:24.318 "state": "completed" 00:14:24.318 }, 00:14:24.318 "cntlid": 23, 00:14:24.318 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a", 00:14:24.318 "listen_address": { 00:14:24.318 "adrfam": "IPv4", 00:14:24.318 "traddr": "10.0.0.3", 00:14:24.318 "trsvcid": "4420", 00:14:24.318 "trtype": "TCP" 00:14:24.318 }, 00:14:24.318 "peer_address": { 00:14:24.318 "adrfam": "IPv4", 00:14:24.318 "traddr": "10.0.0.1", 00:14:24.318 "trsvcid": "52096", 00:14:24.318 "trtype": "TCP" 00:14:24.318 }, 00:14:24.318 "qid": 0, 00:14:24.318 "state": "enabled", 00:14:24.318 "thread": "nvmf_tgt_poll_group_000" 00:14:24.318 } 00:14:24.318 ]' 00:14:24.318 11:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:24.318 11:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == 
\s\h\a\2\5\6 ]] 00:14:24.318 11:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:24.318 11:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:24.318 11:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:24.318 11:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:24.318 11:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:24.318 11:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:24.885 11:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ODUxYTg2NGMwMGE3ZWM0ODJlYmZkMzVkNDkwOGFmZWViNzcwOTJlM2FlOTM1MmRjMzZjMWRlMTQwYzZjNTgwZnmdTmU=: 00:14:24.885 11:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a --hostid 806d442c-08ba-4719-b76d-33169283459a -l 0 --dhchap-secret DHHC-1:03:ODUxYTg2NGMwMGE3ZWM0ODJlYmZkMzVkNDkwOGFmZWViNzcwOTJlM2FlOTM1MmRjMzZjMWRlMTQwYzZjNTgwZnmdTmU=: 00:14:25.453 11:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:25.453 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:25.453 11:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a 00:14:25.453 11:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.453 11:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:25.453 11:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.453 11:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:25.453 11:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:25.453 11:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:25.453 11:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:25.740 11:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:14:25.740 11:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:25.740 11:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:25.740 11:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:14:25.740 11:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:25.740 11:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:25.740 11:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:25.740 11:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.740 11:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:25.740 11:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.740 11:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:25.741 11:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:25.741 11:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:26.308 00:14:26.308 11:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:26.308 11:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:26.308 11:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:26.567 11:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:26.567 11:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:26.567 11:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.567 11:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:26.567 11:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.567 11:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:26.567 { 00:14:26.567 "auth": { 00:14:26.567 "dhgroup": "ffdhe4096", 00:14:26.567 "digest": "sha256", 00:14:26.567 "state": "completed" 00:14:26.567 }, 00:14:26.567 "cntlid": 25, 00:14:26.567 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a", 00:14:26.567 "listen_address": { 00:14:26.567 "adrfam": "IPv4", 00:14:26.567 "traddr": "10.0.0.3", 00:14:26.567 "trsvcid": "4420", 00:14:26.567 "trtype": "TCP" 00:14:26.567 }, 00:14:26.567 "peer_address": { 00:14:26.567 "adrfam": "IPv4", 00:14:26.567 "traddr": "10.0.0.1", 00:14:26.567 "trsvcid": "59450", 00:14:26.567 "trtype": "TCP" 00:14:26.567 }, 00:14:26.567 "qid": 0, 00:14:26.567 "state": "enabled", 00:14:26.567 "thread": "nvmf_tgt_poll_group_000" 00:14:26.567 } 00:14:26.567 ]' 00:14:26.567 11:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r 
'.[0].auth.digest' 00:14:26.567 11:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:26.567 11:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:26.567 11:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:26.567 11:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:26.826 11:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:26.826 11:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:26.826 11:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:27.084 11:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTU4ZTYwMjZmNWEyYzgzYjY3ZWU1MmE4ZjU0YzRmMjAwM2Q1ZGY4ZmVkZmVkZWY0w3W3ew==: --dhchap-ctrl-secret DHHC-1:03:OTgwNWJkMjE1MTUxNjMxNjRkZTlhZmI3YTJjMjIyN2RhYzM4MjEwYmYxZmI5NWFkZGQ5MTQ2M2U2NWUyYzQ3N6HE8a4=: 00:14:27.084 11:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a --hostid 806d442c-08ba-4719-b76d-33169283459a -l 0 --dhchap-secret DHHC-1:00:YTU4ZTYwMjZmNWEyYzgzYjY3ZWU1MmE4ZjU0YzRmMjAwM2Q1ZGY4ZmVkZmVkZWY0w3W3ew==: --dhchap-ctrl-secret DHHC-1:03:OTgwNWJkMjE1MTUxNjMxNjRkZTlhZmI3YTJjMjIyN2RhYzM4MjEwYmYxZmI5NWFkZGQ5MTQ2M2U2NWUyYzQ3N6HE8a4=: 00:14:27.652 11:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:27.652 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:27.652 11:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a 00:14:27.652 11:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.652 11:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:27.652 11:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.652 11:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:27.652 11:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:27.652 11:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:28.227 11:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:14:28.227 11:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:28.227 11:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:28.227 11:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe4096 00:14:28.227 11:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:28.227 11:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:28.227 11:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:28.227 11:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.227 11:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:28.227 11:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.227 11:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:28.227 11:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:28.227 11:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:28.494 00:14:28.494 11:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:28.494 11:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:28.494 11:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:28.753 11:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:28.753 11:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:28.753 11:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.753 11:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:28.753 11:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.753 11:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:28.753 { 00:14:28.753 "auth": { 00:14:28.753 "dhgroup": "ffdhe4096", 00:14:28.753 "digest": "sha256", 00:14:28.753 "state": "completed" 00:14:28.753 }, 00:14:28.753 "cntlid": 27, 00:14:28.753 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a", 00:14:28.753 "listen_address": { 00:14:28.753 "adrfam": "IPv4", 00:14:28.753 "traddr": "10.0.0.3", 00:14:28.753 "trsvcid": "4420", 00:14:28.753 "trtype": "TCP" 00:14:28.753 }, 00:14:28.753 "peer_address": { 00:14:28.753 "adrfam": "IPv4", 00:14:28.753 "traddr": "10.0.0.1", 00:14:28.753 "trsvcid": "59478", 00:14:28.753 "trtype": "TCP" 00:14:28.753 }, 00:14:28.753 "qid": 0, 
00:14:28.753 "state": "enabled", 00:14:28.753 "thread": "nvmf_tgt_poll_group_000" 00:14:28.753 } 00:14:28.753 ]' 00:14:28.753 11:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:28.753 11:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:28.753 11:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:29.012 11:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:29.012 11:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:29.012 11:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:29.012 11:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:29.012 11:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:29.271 11:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTdlOWYyOTEzYTYxOThiMjE5ZmM2NThmOTY2YmJmZTg53tHC: --dhchap-ctrl-secret DHHC-1:02:ZThlN2MwNmMxNTdkMjhlYTU2YjU2MTY0ZmIwMThlYzFkZjAwNTU3YTdhMTQwNDU0RjLgqQ==: 00:14:29.271 11:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a --hostid 806d442c-08ba-4719-b76d-33169283459a -l 0 --dhchap-secret DHHC-1:01:OTdlOWYyOTEzYTYxOThiMjE5ZmM2NThmOTY2YmJmZTg53tHC: --dhchap-ctrl-secret DHHC-1:02:ZThlN2MwNmMxNTdkMjhlYTU2YjU2MTY0ZmIwMThlYzFkZjAwNTU3YTdhMTQwNDU0RjLgqQ==: 00:14:30.206 11:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:30.206 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:30.206 11:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a 00:14:30.206 11:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.206 11:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:30.206 11:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.206 11:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:30.206 11:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:30.206 11:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:30.464 11:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:14:30.464 11:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:30.464 11:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@67 -- # digest=sha256 00:14:30.464 11:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:14:30.464 11:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:30.464 11:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:30.464 11:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:30.464 11:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.464 11:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:30.464 11:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.464 11:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:30.464 11:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:30.464 11:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:30.721 00:14:30.721 11:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:30.721 11:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:30.721 11:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:31.288 11:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:31.288 11:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:31.288 11:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.288 11:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:31.288 11:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.288 11:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:31.288 { 00:14:31.288 "auth": { 00:14:31.288 "dhgroup": "ffdhe4096", 00:14:31.288 "digest": "sha256", 00:14:31.288 "state": "completed" 00:14:31.288 }, 00:14:31.288 "cntlid": 29, 00:14:31.288 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a", 00:14:31.288 "listen_address": { 00:14:31.288 "adrfam": "IPv4", 00:14:31.288 "traddr": "10.0.0.3", 00:14:31.288 "trsvcid": "4420", 00:14:31.288 "trtype": "TCP" 00:14:31.288 }, 00:14:31.288 "peer_address": { 00:14:31.288 "adrfam": "IPv4", 00:14:31.288 "traddr": "10.0.0.1", 
00:14:31.288 "trsvcid": "59506", 00:14:31.288 "trtype": "TCP" 00:14:31.288 }, 00:14:31.288 "qid": 0, 00:14:31.288 "state": "enabled", 00:14:31.288 "thread": "nvmf_tgt_poll_group_000" 00:14:31.288 } 00:14:31.288 ]' 00:14:31.288 11:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:31.288 11:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:31.288 11:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:31.288 11:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:31.288 11:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:31.288 11:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:31.288 11:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:31.288 11:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:31.546 11:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ODhkNTRmMmE1ZmNkZmY0ZDExY2UyMjZlNjZlZDI4MzAwNzA5YmUxMmE5YWMyYWQzAlDRWQ==: --dhchap-ctrl-secret DHHC-1:01:ZmExZTRhMGYzNzhkMWQ2OTBkMjIyMzE3OWFiZjhkMTkzDxnU: 00:14:31.546 11:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a --hostid 806d442c-08ba-4719-b76d-33169283459a -l 0 --dhchap-secret DHHC-1:02:ODhkNTRmMmE1ZmNkZmY0ZDExY2UyMjZlNjZlZDI4MzAwNzA5YmUxMmE5YWMyYWQzAlDRWQ==: --dhchap-ctrl-secret DHHC-1:01:ZmExZTRhMGYzNzhkMWQ2OTBkMjIyMzE3OWFiZjhkMTkzDxnU: 00:14:32.482 11:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:32.482 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:32.482 11:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a 00:14:32.482 11:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.482 11:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:32.482 11:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.482 11:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:32.482 11:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:32.482 11:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:32.828 11:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:14:32.828 11:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # 
local digest dhgroup key ckey qpairs 00:14:32.828 11:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:32.828 11:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:14:32.828 11:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:32.828 11:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:32.828 11:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a --dhchap-key key3 00:14:32.828 11:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.828 11:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:32.828 11:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.828 11:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:32.828 11:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:32.828 11:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:33.118 00:14:33.118 11:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:33.118 11:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:33.118 11:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:33.376 11:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:33.376 11:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:33.376 11:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.376 11:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:33.376 11:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.376 11:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:33.376 { 00:14:33.376 "auth": { 00:14:33.376 "dhgroup": "ffdhe4096", 00:14:33.376 "digest": "sha256", 00:14:33.376 "state": "completed" 00:14:33.376 }, 00:14:33.376 "cntlid": 31, 00:14:33.376 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a", 00:14:33.376 "listen_address": { 00:14:33.376 "adrfam": "IPv4", 00:14:33.376 "traddr": "10.0.0.3", 00:14:33.376 "trsvcid": "4420", 00:14:33.376 "trtype": "TCP" 00:14:33.376 }, 00:14:33.376 "peer_address": { 00:14:33.376 "adrfam": "IPv4", 00:14:33.376 "traddr": 
"10.0.0.1", 00:14:33.376 "trsvcid": "59516", 00:14:33.376 "trtype": "TCP" 00:14:33.376 }, 00:14:33.376 "qid": 0, 00:14:33.376 "state": "enabled", 00:14:33.376 "thread": "nvmf_tgt_poll_group_000" 00:14:33.376 } 00:14:33.376 ]' 00:14:33.376 11:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:33.376 11:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:33.376 11:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:33.636 11:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:33.636 11:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:33.636 11:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:33.636 11:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:33.636 11:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:33.894 11:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ODUxYTg2NGMwMGE3ZWM0ODJlYmZkMzVkNDkwOGFmZWViNzcwOTJlM2FlOTM1MmRjMzZjMWRlMTQwYzZjNTgwZnmdTmU=: 00:14:33.894 11:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a --hostid 806d442c-08ba-4719-b76d-33169283459a -l 0 --dhchap-secret DHHC-1:03:ODUxYTg2NGMwMGE3ZWM0ODJlYmZkMzVkNDkwOGFmZWViNzcwOTJlM2FlOTM1MmRjMzZjMWRlMTQwYzZjNTgwZnmdTmU=: 00:14:34.461 11:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:34.461 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:34.461 11:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a 00:14:34.461 11:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.461 11:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:34.461 11:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.461 11:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:34.461 11:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:34.461 11:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:34.461 11:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:35.029 11:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:14:35.029 11:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:35.029 11:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:35.029 11:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:14:35.029 11:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:35.029 11:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:35.030 11:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:35.030 11:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.030 11:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:35.030 11:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.030 11:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:35.030 11:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:35.030 11:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:35.288 00:14:35.288 11:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:35.288 11:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:35.288 11:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:35.855 11:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:35.855 11:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:35.855 11:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.855 11:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:35.855 11:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.855 11:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:35.855 { 00:14:35.855 "auth": { 00:14:35.855 "dhgroup": "ffdhe6144", 00:14:35.855 "digest": "sha256", 00:14:35.855 "state": "completed" 00:14:35.855 }, 00:14:35.855 "cntlid": 33, 00:14:35.855 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a", 00:14:35.855 "listen_address": { 00:14:35.855 "adrfam": "IPv4", 00:14:35.855 "traddr": "10.0.0.3", 00:14:35.855 "trsvcid": "4420", 00:14:35.855 
"trtype": "TCP" 00:14:35.855 }, 00:14:35.855 "peer_address": { 00:14:35.855 "adrfam": "IPv4", 00:14:35.855 "traddr": "10.0.0.1", 00:14:35.855 "trsvcid": "44588", 00:14:35.855 "trtype": "TCP" 00:14:35.855 }, 00:14:35.855 "qid": 0, 00:14:35.855 "state": "enabled", 00:14:35.855 "thread": "nvmf_tgt_poll_group_000" 00:14:35.855 } 00:14:35.855 ]' 00:14:35.855 11:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:35.855 11:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:35.855 11:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:35.855 11:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:35.855 11:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:35.855 11:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:35.855 11:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:35.855 11:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:36.423 11:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTU4ZTYwMjZmNWEyYzgzYjY3ZWU1MmE4ZjU0YzRmMjAwM2Q1ZGY4ZmVkZmVkZWY0w3W3ew==: --dhchap-ctrl-secret DHHC-1:03:OTgwNWJkMjE1MTUxNjMxNjRkZTlhZmI3YTJjMjIyN2RhYzM4MjEwYmYxZmI5NWFkZGQ5MTQ2M2U2NWUyYzQ3N6HE8a4=: 00:14:36.423 11:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a --hostid 806d442c-08ba-4719-b76d-33169283459a -l 0 --dhchap-secret DHHC-1:00:YTU4ZTYwMjZmNWEyYzgzYjY3ZWU1MmE4ZjU0YzRmMjAwM2Q1ZGY4ZmVkZmVkZWY0w3W3ew==: --dhchap-ctrl-secret DHHC-1:03:OTgwNWJkMjE1MTUxNjMxNjRkZTlhZmI3YTJjMjIyN2RhYzM4MjEwYmYxZmI5NWFkZGQ5MTQ2M2U2NWUyYzQ3N6HE8a4=: 00:14:36.990 11:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:36.990 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:36.990 11:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a 00:14:36.990 11:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.990 11:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:36.990 11:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.990 11:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:36.990 11:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:36.990 11:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 
00:14:37.249 11:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:14:37.249 11:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:37.249 11:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:37.249 11:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:14:37.249 11:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:37.249 11:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:37.249 11:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:37.249 11:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.249 11:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:37.249 11:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.249 11:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:37.250 11:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:37.250 11:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:37.816 00:14:37.817 11:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:37.817 11:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:37.817 11:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:38.383 11:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:38.383 11:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:38.383 11:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.383 11:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:38.383 11:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.383 11:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:38.383 { 00:14:38.383 "auth": { 00:14:38.383 "dhgroup": "ffdhe6144", 00:14:38.383 "digest": "sha256", 00:14:38.383 "state": "completed" 00:14:38.383 }, 00:14:38.383 "cntlid": 35, 00:14:38.383 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a", 00:14:38.383 "listen_address": { 00:14:38.383 "adrfam": "IPv4", 00:14:38.383 "traddr": "10.0.0.3", 00:14:38.383 "trsvcid": "4420", 00:14:38.383 "trtype": "TCP" 00:14:38.383 }, 00:14:38.384 "peer_address": { 00:14:38.384 "adrfam": "IPv4", 00:14:38.384 "traddr": "10.0.0.1", 00:14:38.384 "trsvcid": "44604", 00:14:38.384 "trtype": "TCP" 00:14:38.384 }, 00:14:38.384 "qid": 0, 00:14:38.384 "state": "enabled", 00:14:38.384 "thread": "nvmf_tgt_poll_group_000" 00:14:38.384 } 00:14:38.384 ]' 00:14:38.384 11:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:38.384 11:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:38.384 11:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:38.384 11:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:38.384 11:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:38.384 11:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:38.384 11:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:38.384 11:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:38.951 11:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTdlOWYyOTEzYTYxOThiMjE5ZmM2NThmOTY2YmJmZTg53tHC: --dhchap-ctrl-secret DHHC-1:02:ZThlN2MwNmMxNTdkMjhlYTU2YjU2MTY0ZmIwMThlYzFkZjAwNTU3YTdhMTQwNDU0RjLgqQ==: 00:14:38.951 11:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a --hostid 806d442c-08ba-4719-b76d-33169283459a -l 0 --dhchap-secret DHHC-1:01:OTdlOWYyOTEzYTYxOThiMjE5ZmM2NThmOTY2YmJmZTg53tHC: --dhchap-ctrl-secret DHHC-1:02:ZThlN2MwNmMxNTdkMjhlYTU2YjU2MTY0ZmIwMThlYzFkZjAwNTU3YTdhMTQwNDU0RjLgqQ==: 00:14:39.518 11:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:39.518 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:39.518 11:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a 00:14:39.518 11:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.518 11:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:39.518 11:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.518 11:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:39.518 11:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:39.518 11:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:39.837 11:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:14:39.837 11:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:39.837 11:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:39.837 11:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:14:39.837 11:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:39.837 11:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:39.837 11:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:39.837 11:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.837 11:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:39.838 11:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.838 11:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:39.838 11:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:39.838 11:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:40.403 00:14:40.403 11:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:40.403 11:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:40.403 11:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:40.660 11:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:40.660 11:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:40.660 11:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.660 11:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:40.660 11:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.660 11:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:40.660 { 00:14:40.660 "auth": { 00:14:40.660 "dhgroup": "ffdhe6144", 
00:14:40.660 "digest": "sha256", 00:14:40.660 "state": "completed" 00:14:40.660 }, 00:14:40.660 "cntlid": 37, 00:14:40.660 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a", 00:14:40.660 "listen_address": { 00:14:40.660 "adrfam": "IPv4", 00:14:40.660 "traddr": "10.0.0.3", 00:14:40.660 "trsvcid": "4420", 00:14:40.660 "trtype": "TCP" 00:14:40.660 }, 00:14:40.660 "peer_address": { 00:14:40.660 "adrfam": "IPv4", 00:14:40.660 "traddr": "10.0.0.1", 00:14:40.660 "trsvcid": "44628", 00:14:40.660 "trtype": "TCP" 00:14:40.660 }, 00:14:40.660 "qid": 0, 00:14:40.660 "state": "enabled", 00:14:40.660 "thread": "nvmf_tgt_poll_group_000" 00:14:40.660 } 00:14:40.660 ]' 00:14:40.660 11:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:40.660 11:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:40.660 11:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:40.660 11:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:40.660 11:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:40.917 11:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:40.917 11:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:40.917 11:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:41.174 11:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ODhkNTRmMmE1ZmNkZmY0ZDExY2UyMjZlNjZlZDI4MzAwNzA5YmUxMmE5YWMyYWQzAlDRWQ==: --dhchap-ctrl-secret DHHC-1:01:ZmExZTRhMGYzNzhkMWQ2OTBkMjIyMzE3OWFiZjhkMTkzDxnU: 00:14:41.174 11:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a --hostid 806d442c-08ba-4719-b76d-33169283459a -l 0 --dhchap-secret DHHC-1:02:ODhkNTRmMmE1ZmNkZmY0ZDExY2UyMjZlNjZlZDI4MzAwNzA5YmUxMmE5YWMyYWQzAlDRWQ==: --dhchap-ctrl-secret DHHC-1:01:ZmExZTRhMGYzNzhkMWQ2OTBkMjIyMzE3OWFiZjhkMTkzDxnU: 00:14:41.741 11:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:41.741 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:41.741 11:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a 00:14:41.741 11:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.741 11:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:41.741 11:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.741 11:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:41.741 11:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe6144 00:14:41.741 11:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:42.001 11:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:14:42.001 11:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:42.001 11:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:42.001 11:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:14:42.001 11:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:42.001 11:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:42.001 11:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a --dhchap-key key3 00:14:42.001 11:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.001 11:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:42.001 11:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.001 11:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:42.001 11:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:42.001 11:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:42.578 00:14:42.578 11:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:42.578 11:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:42.578 11:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:42.844 11:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:43.112 11:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:43.112 11:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.112 11:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:43.112 11:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.112 11:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:43.112 { 00:14:43.112 "auth": { 00:14:43.112 "dhgroup": 
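The verification that follows each attach, condensed from the trace: the host checks that the controller is listed, and the target reports the qpair's negotiated auth parameters, which the script compares with jq. The jq paths and expected values are exactly those in the trace; the qpairs variable stands in for the rpc_cmd output captured above, and the target-side call is again assumed to hit the default RPC socket.

  # host side: the attached controller should show up as nvme0
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock \
      bdev_nvme_get_controllers | jq -r '.[].name'

  # target side: dump the subsystem's qpairs and check the negotiated auth fields
  qpairs=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py \
      nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
  [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha256    ]]
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe6144 ]]
  [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]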
"ffdhe6144", 00:14:43.112 "digest": "sha256", 00:14:43.112 "state": "completed" 00:14:43.112 }, 00:14:43.112 "cntlid": 39, 00:14:43.112 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a", 00:14:43.112 "listen_address": { 00:14:43.112 "adrfam": "IPv4", 00:14:43.112 "traddr": "10.0.0.3", 00:14:43.112 "trsvcid": "4420", 00:14:43.112 "trtype": "TCP" 00:14:43.112 }, 00:14:43.112 "peer_address": { 00:14:43.112 "adrfam": "IPv4", 00:14:43.112 "traddr": "10.0.0.1", 00:14:43.112 "trsvcid": "44644", 00:14:43.112 "trtype": "TCP" 00:14:43.112 }, 00:14:43.112 "qid": 0, 00:14:43.112 "state": "enabled", 00:14:43.112 "thread": "nvmf_tgt_poll_group_000" 00:14:43.112 } 00:14:43.112 ]' 00:14:43.112 11:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:43.112 11:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:43.112 11:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:43.112 11:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:43.112 11:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:43.112 11:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:43.112 11:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:43.112 11:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:43.383 11:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ODUxYTg2NGMwMGE3ZWM0ODJlYmZkMzVkNDkwOGFmZWViNzcwOTJlM2FlOTM1MmRjMzZjMWRlMTQwYzZjNTgwZnmdTmU=: 00:14:43.383 11:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a --hostid 806d442c-08ba-4719-b76d-33169283459a -l 0 --dhchap-secret DHHC-1:03:ODUxYTg2NGMwMGE3ZWM0ODJlYmZkMzVkNDkwOGFmZWViNzcwOTJlM2FlOTM1MmRjMzZjMWRlMTQwYzZjNTgwZnmdTmU=: 00:14:44.336 11:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:44.336 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:44.336 11:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a 00:14:44.336 11:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.336 11:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:44.336 11:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.336 11:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:44.336 11:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:44.336 11:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options 
--dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:44.336 11:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:44.595 11:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:14:44.595 11:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:44.595 11:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:44.595 11:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:14:44.595 11:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:44.595 11:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:44.595 11:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:44.595 11:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.595 11:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:44.595 11:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.595 11:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:44.595 11:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:44.595 11:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:45.532 00:14:45.532 11:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:45.532 11:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:45.532 11:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:45.795 11:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:45.795 11:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:45.795 11:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.795 11:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:45.795 11:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.795 11:27:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:45.795 { 00:14:45.795 "auth": { 00:14:45.795 "dhgroup": "ffdhe8192", 00:14:45.795 "digest": "sha256", 00:14:45.795 "state": "completed" 00:14:45.795 }, 00:14:45.795 "cntlid": 41, 00:14:45.795 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a", 00:14:45.795 "listen_address": { 00:14:45.795 "adrfam": "IPv4", 00:14:45.795 "traddr": "10.0.0.3", 00:14:45.795 "trsvcid": "4420", 00:14:45.795 "trtype": "TCP" 00:14:45.795 }, 00:14:45.795 "peer_address": { 00:14:45.795 "adrfam": "IPv4", 00:14:45.795 "traddr": "10.0.0.1", 00:14:45.795 "trsvcid": "52354", 00:14:45.795 "trtype": "TCP" 00:14:45.795 }, 00:14:45.795 "qid": 0, 00:14:45.795 "state": "enabled", 00:14:45.795 "thread": "nvmf_tgt_poll_group_000" 00:14:45.795 } 00:14:45.795 ]' 00:14:45.795 11:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:45.795 11:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:45.795 11:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:45.795 11:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:45.795 11:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:45.795 11:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:45.795 11:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:45.795 11:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:46.053 11:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTU4ZTYwMjZmNWEyYzgzYjY3ZWU1MmE4ZjU0YzRmMjAwM2Q1ZGY4ZmVkZmVkZWY0w3W3ew==: --dhchap-ctrl-secret DHHC-1:03:OTgwNWJkMjE1MTUxNjMxNjRkZTlhZmI3YTJjMjIyN2RhYzM4MjEwYmYxZmI5NWFkZGQ5MTQ2M2U2NWUyYzQ3N6HE8a4=: 00:14:46.053 11:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a --hostid 806d442c-08ba-4719-b76d-33169283459a -l 0 --dhchap-secret DHHC-1:00:YTU4ZTYwMjZmNWEyYzgzYjY3ZWU1MmE4ZjU0YzRmMjAwM2Q1ZGY4ZmVkZmVkZWY0w3W3ew==: --dhchap-ctrl-secret DHHC-1:03:OTgwNWJkMjE1MTUxNjMxNjRkZTlhZmI3YTJjMjIyN2RhYzM4MjEwYmYxZmI5NWFkZGQ5MTQ2M2U2NWUyYzQ3N6HE8a4=: 00:14:46.985 11:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:46.985 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:46.985 11:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a 00:14:46.985 11:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.985 11:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:46.985 11:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.985 11:27:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:46.985 11:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:46.985 11:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:47.243 11:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:14:47.243 11:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:47.243 11:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:47.243 11:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:14:47.243 11:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:47.243 11:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:47.243 11:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:47.243 11:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.243 11:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:47.243 11:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.243 11:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:47.243 11:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:47.243 11:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:47.807 00:14:47.807 11:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:47.807 11:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:47.807 11:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:48.372 11:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:48.372 11:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:48.372 11:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.372 11:27:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:48.372 11:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.372 11:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:48.372 { 00:14:48.372 "auth": { 00:14:48.372 "dhgroup": "ffdhe8192", 00:14:48.372 "digest": "sha256", 00:14:48.372 "state": "completed" 00:14:48.372 }, 00:14:48.372 "cntlid": 43, 00:14:48.372 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a", 00:14:48.372 "listen_address": { 00:14:48.372 "adrfam": "IPv4", 00:14:48.372 "traddr": "10.0.0.3", 00:14:48.372 "trsvcid": "4420", 00:14:48.372 "trtype": "TCP" 00:14:48.372 }, 00:14:48.372 "peer_address": { 00:14:48.372 "adrfam": "IPv4", 00:14:48.372 "traddr": "10.0.0.1", 00:14:48.372 "trsvcid": "52372", 00:14:48.372 "trtype": "TCP" 00:14:48.372 }, 00:14:48.372 "qid": 0, 00:14:48.372 "state": "enabled", 00:14:48.372 "thread": "nvmf_tgt_poll_group_000" 00:14:48.372 } 00:14:48.372 ]' 00:14:48.373 11:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:48.373 11:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:48.373 11:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:48.373 11:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:48.373 11:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:48.373 11:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:48.373 11:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:48.373 11:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:48.630 11:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTdlOWYyOTEzYTYxOThiMjE5ZmM2NThmOTY2YmJmZTg53tHC: --dhchap-ctrl-secret DHHC-1:02:ZThlN2MwNmMxNTdkMjhlYTU2YjU2MTY0ZmIwMThlYzFkZjAwNTU3YTdhMTQwNDU0RjLgqQ==: 00:14:48.630 11:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a --hostid 806d442c-08ba-4719-b76d-33169283459a -l 0 --dhchap-secret DHHC-1:01:OTdlOWYyOTEzYTYxOThiMjE5ZmM2NThmOTY2YmJmZTg53tHC: --dhchap-ctrl-secret DHHC-1:02:ZThlN2MwNmMxNTdkMjhlYTU2YjU2MTY0ZmIwMThlYzFkZjAwNTU3YTdhMTQwNDU0RjLgqQ==: 00:14:49.581 11:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:49.581 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:49.581 11:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a 00:14:49.581 11:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.581 11:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:14:49.581 11:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.581 11:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:49.581 11:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:49.581 11:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:49.840 11:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:14:49.840 11:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:49.840 11:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:49.840 11:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:14:49.840 11:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:49.840 11:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:49.840 11:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:49.840 11:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.840 11:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:49.840 11:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.840 11:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:49.840 11:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:49.840 11:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:50.406 00:14:50.406 11:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:50.406 11:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:50.406 11:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:50.664 11:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:50.664 11:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:50.664 11:27:36 
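Besides the SPDK host (bdev_nvme_attach_controller), the trace also exercises the kernel initiator through nvme-cli, passing the plain-text DHHC-1 secrets on the command line. A sketch of that path with the arguments used above; the secret strings are elided here, the full values appear in the trace.

  # kernel-initiator path: authenticate with nvme-cli's --dhchap-secret options
  nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
      -q nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a \
      --hostid 806d442c-08ba-4719-b76d-33169283459a -l 0 \
      --dhchap-secret 'DHHC-1:02:...' --dhchap-ctrl-secret 'DHHC-1:01:...'

  # drop the connection again before the next combination is tried
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0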
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.664 11:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:50.664 11:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.664 11:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:50.664 { 00:14:50.664 "auth": { 00:14:50.664 "dhgroup": "ffdhe8192", 00:14:50.664 "digest": "sha256", 00:14:50.664 "state": "completed" 00:14:50.664 }, 00:14:50.664 "cntlid": 45, 00:14:50.664 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a", 00:14:50.664 "listen_address": { 00:14:50.664 "adrfam": "IPv4", 00:14:50.664 "traddr": "10.0.0.3", 00:14:50.664 "trsvcid": "4420", 00:14:50.664 "trtype": "TCP" 00:14:50.664 }, 00:14:50.664 "peer_address": { 00:14:50.664 "adrfam": "IPv4", 00:14:50.664 "traddr": "10.0.0.1", 00:14:50.664 "trsvcid": "52408", 00:14:50.664 "trtype": "TCP" 00:14:50.664 }, 00:14:50.664 "qid": 0, 00:14:50.664 "state": "enabled", 00:14:50.664 "thread": "nvmf_tgt_poll_group_000" 00:14:50.664 } 00:14:50.664 ]' 00:14:50.664 11:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:50.922 11:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:50.922 11:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:50.922 11:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:50.922 11:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:50.922 11:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:50.922 11:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:50.922 11:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:51.181 11:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ODhkNTRmMmE1ZmNkZmY0ZDExY2UyMjZlNjZlZDI4MzAwNzA5YmUxMmE5YWMyYWQzAlDRWQ==: --dhchap-ctrl-secret DHHC-1:01:ZmExZTRhMGYzNzhkMWQ2OTBkMjIyMzE3OWFiZjhkMTkzDxnU: 00:14:51.181 11:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a --hostid 806d442c-08ba-4719-b76d-33169283459a -l 0 --dhchap-secret DHHC-1:02:ODhkNTRmMmE1ZmNkZmY0ZDExY2UyMjZlNjZlZDI4MzAwNzA5YmUxMmE5YWMyYWQzAlDRWQ==: --dhchap-ctrl-secret DHHC-1:01:ZmExZTRhMGYzNzhkMWQ2OTBkMjIyMzE3OWFiZjhkMTkzDxnU: 00:14:52.142 11:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:52.142 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:52.142 11:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a 00:14:52.142 11:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
00:14:52.142 11:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:52.142 11:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.142 11:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:52.142 11:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:52.142 11:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:52.402 11:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:14:52.402 11:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:52.402 11:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:52.402 11:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:14:52.402 11:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:52.403 11:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:52.403 11:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a --dhchap-key key3 00:14:52.403 11:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.403 11:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:52.403 11:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.403 11:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:52.403 11:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:52.403 11:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:52.970 00:14:52.970 11:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:52.970 11:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:52.970 11:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:53.228 11:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:53.228 11:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:53.228 
11:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.228 11:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:53.229 11:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.229 11:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:53.229 { 00:14:53.229 "auth": { 00:14:53.229 "dhgroup": "ffdhe8192", 00:14:53.229 "digest": "sha256", 00:14:53.229 "state": "completed" 00:14:53.229 }, 00:14:53.229 "cntlid": 47, 00:14:53.229 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a", 00:14:53.229 "listen_address": { 00:14:53.229 "adrfam": "IPv4", 00:14:53.229 "traddr": "10.0.0.3", 00:14:53.229 "trsvcid": "4420", 00:14:53.229 "trtype": "TCP" 00:14:53.229 }, 00:14:53.229 "peer_address": { 00:14:53.229 "adrfam": "IPv4", 00:14:53.229 "traddr": "10.0.0.1", 00:14:53.229 "trsvcid": "52446", 00:14:53.229 "trtype": "TCP" 00:14:53.229 }, 00:14:53.229 "qid": 0, 00:14:53.229 "state": "enabled", 00:14:53.229 "thread": "nvmf_tgt_poll_group_000" 00:14:53.229 } 00:14:53.229 ]' 00:14:53.229 11:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:53.490 11:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:53.490 11:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:53.490 11:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:53.490 11:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:53.490 11:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:53.490 11:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:53.490 11:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:53.750 11:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ODUxYTg2NGMwMGE3ZWM0ODJlYmZkMzVkNDkwOGFmZWViNzcwOTJlM2FlOTM1MmRjMzZjMWRlMTQwYzZjNTgwZnmdTmU=: 00:14:53.750 11:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a --hostid 806d442c-08ba-4719-b76d-33169283459a -l 0 --dhchap-secret DHHC-1:03:ODUxYTg2NGMwMGE3ZWM0ODJlYmZkMzVkNDkwOGFmZWViNzcwOTJlM2FlOTM1MmRjMzZjMWRlMTQwYzZjNTgwZnmdTmU=: 00:14:54.753 11:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:54.753 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:54.753 11:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a 00:14:54.753 11:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.753 11:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:14:54.753 11:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.753 11:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:14:54.753 11:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:54.753 11:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:54.753 11:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:14:54.753 11:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:14:55.032 11:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:14:55.032 11:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:55.032 11:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:55.032 11:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:14:55.032 11:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:55.032 11:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:55.032 11:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:55.032 11:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.032 11:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:55.032 11:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.032 11:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:55.032 11:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:55.032 11:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:55.291 00:14:55.291 11:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:55.291 11:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:55.291 11:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:55.549 11:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:55.549 11:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:55.549 11:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.549 11:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:55.549 11:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.549 11:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:55.549 { 00:14:55.549 "auth": { 00:14:55.549 "dhgroup": "null", 00:14:55.549 "digest": "sha384", 00:14:55.549 "state": "completed" 00:14:55.549 }, 00:14:55.549 "cntlid": 49, 00:14:55.549 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a", 00:14:55.549 "listen_address": { 00:14:55.549 "adrfam": "IPv4", 00:14:55.549 "traddr": "10.0.0.3", 00:14:55.549 "trsvcid": "4420", 00:14:55.549 "trtype": "TCP" 00:14:55.549 }, 00:14:55.549 "peer_address": { 00:14:55.549 "adrfam": "IPv4", 00:14:55.549 "traddr": "10.0.0.1", 00:14:55.549 "trsvcid": "34472", 00:14:55.549 "trtype": "TCP" 00:14:55.549 }, 00:14:55.549 "qid": 0, 00:14:55.549 "state": "enabled", 00:14:55.549 "thread": "nvmf_tgt_poll_group_000" 00:14:55.549 } 00:14:55.549 ]' 00:14:55.549 11:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:55.549 11:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:55.549 11:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:55.808 11:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:14:55.808 11:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:55.808 11:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:55.808 11:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:55.808 11:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:56.066 11:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTU4ZTYwMjZmNWEyYzgzYjY3ZWU1MmE4ZjU0YzRmMjAwM2Q1ZGY4ZmVkZmVkZWY0w3W3ew==: --dhchap-ctrl-secret DHHC-1:03:OTgwNWJkMjE1MTUxNjMxNjRkZTlhZmI3YTJjMjIyN2RhYzM4MjEwYmYxZmI5NWFkZGQ5MTQ2M2U2NWUyYzQ3N6HE8a4=: 00:14:56.066 11:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a --hostid 806d442c-08ba-4719-b76d-33169283459a -l 0 --dhchap-secret DHHC-1:00:YTU4ZTYwMjZmNWEyYzgzYjY3ZWU1MmE4ZjU0YzRmMjAwM2Q1ZGY4ZmVkZmVkZWY0w3W3ew==: --dhchap-ctrl-secret DHHC-1:03:OTgwNWJkMjE1MTUxNjMxNjRkZTlhZmI3YTJjMjIyN2RhYzM4MjEwYmYxZmI5NWFkZGQ5MTQ2M2U2NWUyYzQ3N6HE8a4=: 00:14:57.001 11:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:57.001 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:57.001 11:27:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a 00:14:57.001 11:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.001 11:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:57.001 11:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.001 11:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:57.001 11:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:14:57.001 11:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:14:57.001 11:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:14:57.001 11:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:57.001 11:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:57.001 11:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:14:57.001 11:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:57.001 11:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:57.001 11:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:57.001 11:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.001 11:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:57.259 11:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.259 11:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:57.259 11:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:57.259 11:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:57.517 00:14:57.517 11:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:57.517 11:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 
00:14:57.517 11:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:57.776 11:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:57.776 11:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:57.776 11:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.776 11:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:57.776 11:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.776 11:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:57.776 { 00:14:57.776 "auth": { 00:14:57.776 "dhgroup": "null", 00:14:57.776 "digest": "sha384", 00:14:57.776 "state": "completed" 00:14:57.776 }, 00:14:57.776 "cntlid": 51, 00:14:57.776 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a", 00:14:57.776 "listen_address": { 00:14:57.776 "adrfam": "IPv4", 00:14:57.776 "traddr": "10.0.0.3", 00:14:57.776 "trsvcid": "4420", 00:14:57.776 "trtype": "TCP" 00:14:57.776 }, 00:14:57.776 "peer_address": { 00:14:57.776 "adrfam": "IPv4", 00:14:57.776 "traddr": "10.0.0.1", 00:14:57.776 "trsvcid": "34496", 00:14:57.776 "trtype": "TCP" 00:14:57.776 }, 00:14:57.776 "qid": 0, 00:14:57.776 "state": "enabled", 00:14:57.776 "thread": "nvmf_tgt_poll_group_000" 00:14:57.776 } 00:14:57.776 ]' 00:14:57.776 11:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:57.776 11:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:57.776 11:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:58.035 11:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:14:58.035 11:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:58.035 11:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:58.035 11:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:58.035 11:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:58.293 11:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTdlOWYyOTEzYTYxOThiMjE5ZmM2NThmOTY2YmJmZTg53tHC: --dhchap-ctrl-secret DHHC-1:02:ZThlN2MwNmMxNTdkMjhlYTU2YjU2MTY0ZmIwMThlYzFkZjAwNTU3YTdhMTQwNDU0RjLgqQ==: 00:14:58.293 11:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a --hostid 806d442c-08ba-4719-b76d-33169283459a -l 0 --dhchap-secret DHHC-1:01:OTdlOWYyOTEzYTYxOThiMjE5ZmM2NThmOTY2YmJmZTg53tHC: --dhchap-ctrl-secret DHHC-1:02:ZThlN2MwNmMxNTdkMjhlYTU2YjU2MTY0ZmIwMThlYzFkZjAwNTU3YTdhMTQwNDU0RjLgqQ==: 00:14:59.228 11:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:59.228 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:59.228 11:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a 00:14:59.228 11:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.228 11:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:59.228 11:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.228 11:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:59.228 11:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:14:59.228 11:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:14:59.487 11:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:14:59.487 11:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:59.487 11:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:59.487 11:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:14:59.487 11:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:59.487 11:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:59.487 11:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:59.487 11:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.487 11:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:59.487 11:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.487 11:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:59.487 11:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:59.487 11:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:59.754 00:14:59.754 11:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:59.754 11:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc 
bdev_nvme_get_controllers 00:14:59.754 11:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:00.023 11:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:00.023 11:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:00.023 11:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.023 11:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:00.023 11:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.023 11:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:00.023 { 00:15:00.023 "auth": { 00:15:00.023 "dhgroup": "null", 00:15:00.023 "digest": "sha384", 00:15:00.023 "state": "completed" 00:15:00.023 }, 00:15:00.023 "cntlid": 53, 00:15:00.023 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a", 00:15:00.023 "listen_address": { 00:15:00.023 "adrfam": "IPv4", 00:15:00.023 "traddr": "10.0.0.3", 00:15:00.023 "trsvcid": "4420", 00:15:00.023 "trtype": "TCP" 00:15:00.023 }, 00:15:00.023 "peer_address": { 00:15:00.023 "adrfam": "IPv4", 00:15:00.023 "traddr": "10.0.0.1", 00:15:00.023 "trsvcid": "34524", 00:15:00.023 "trtype": "TCP" 00:15:00.023 }, 00:15:00.023 "qid": 0, 00:15:00.023 "state": "enabled", 00:15:00.023 "thread": "nvmf_tgt_poll_group_000" 00:15:00.023 } 00:15:00.023 ]' 00:15:00.023 11:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:00.023 11:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:00.023 11:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:00.281 11:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:00.281 11:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:00.281 11:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:00.281 11:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:00.281 11:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:00.540 11:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ODhkNTRmMmE1ZmNkZmY0ZDExY2UyMjZlNjZlZDI4MzAwNzA5YmUxMmE5YWMyYWQzAlDRWQ==: --dhchap-ctrl-secret DHHC-1:01:ZmExZTRhMGYzNzhkMWQ2OTBkMjIyMzE3OWFiZjhkMTkzDxnU: 00:15:00.540 11:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a --hostid 806d442c-08ba-4719-b76d-33169283459a -l 0 --dhchap-secret DHHC-1:02:ODhkNTRmMmE1ZmNkZmY0ZDExY2UyMjZlNjZlZDI4MzAwNzA5YmUxMmE5YWMyYWQzAlDRWQ==: --dhchap-ctrl-secret DHHC-1:01:ZmExZTRhMGYzNzhkMWQ2OTBkMjIyMzE3OWFiZjhkMTkzDxnU: 00:15:01.475 11:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:01.475 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:01.475 11:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a 00:15:01.475 11:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.475 11:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:01.475 11:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.475 11:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:01.475 11:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:01.475 11:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:01.733 11:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:15:01.733 11:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:01.733 11:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:01.733 11:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:01.734 11:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:01.734 11:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:01.734 11:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a --dhchap-key key3 00:15:01.734 11:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.734 11:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:01.734 11:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.734 11:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:01.734 11:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:01.734 11:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:02.301 00:15:02.302 11:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:02.302 11:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 
00:15:02.302 11:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:02.560 11:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:02.560 11:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:02.560 11:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.560 11:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:02.560 11:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.560 11:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:02.560 { 00:15:02.560 "auth": { 00:15:02.560 "dhgroup": "null", 00:15:02.560 "digest": "sha384", 00:15:02.560 "state": "completed" 00:15:02.560 }, 00:15:02.560 "cntlid": 55, 00:15:02.560 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a", 00:15:02.560 "listen_address": { 00:15:02.560 "adrfam": "IPv4", 00:15:02.560 "traddr": "10.0.0.3", 00:15:02.560 "trsvcid": "4420", 00:15:02.560 "trtype": "TCP" 00:15:02.560 }, 00:15:02.560 "peer_address": { 00:15:02.560 "adrfam": "IPv4", 00:15:02.560 "traddr": "10.0.0.1", 00:15:02.560 "trsvcid": "34542", 00:15:02.560 "trtype": "TCP" 00:15:02.560 }, 00:15:02.560 "qid": 0, 00:15:02.560 "state": "enabled", 00:15:02.560 "thread": "nvmf_tgt_poll_group_000" 00:15:02.560 } 00:15:02.560 ]' 00:15:02.560 11:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:02.560 11:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:02.560 11:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:02.819 11:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:02.819 11:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:02.819 11:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:02.819 11:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:02.819 11:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:03.077 11:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ODUxYTg2NGMwMGE3ZWM0ODJlYmZkMzVkNDkwOGFmZWViNzcwOTJlM2FlOTM1MmRjMzZjMWRlMTQwYzZjNTgwZnmdTmU=: 00:15:03.077 11:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a --hostid 806d442c-08ba-4719-b76d-33169283459a -l 0 --dhchap-secret DHHC-1:03:ODUxYTg2NGMwMGE3ZWM0ODJlYmZkMzVkNDkwOGFmZWViNzcwOTJlM2FlOTM1MmRjMzZjMWRlMTQwYzZjNTgwZnmdTmU=: 00:15:04.010 11:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:04.010 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
00:15:04.010 11:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a 00:15:04.010 11:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.011 11:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:04.011 11:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.011 11:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:04.011 11:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:04.011 11:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:04.011 11:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:04.269 11:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:15:04.269 11:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:04.269 11:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:04.269 11:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:04.269 11:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:04.269 11:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:04.269 11:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:04.269 11:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.269 11:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:04.269 11:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.269 11:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:04.269 11:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:04.269 11:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:04.527 00:15:04.806 11:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:04.806 
11:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:04.806 11:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:05.065 11:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:05.065 11:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:05.065 11:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.065 11:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:05.065 11:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.065 11:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:05.065 { 00:15:05.065 "auth": { 00:15:05.065 "dhgroup": "ffdhe2048", 00:15:05.065 "digest": "sha384", 00:15:05.065 "state": "completed" 00:15:05.065 }, 00:15:05.065 "cntlid": 57, 00:15:05.065 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a", 00:15:05.065 "listen_address": { 00:15:05.065 "adrfam": "IPv4", 00:15:05.065 "traddr": "10.0.0.3", 00:15:05.065 "trsvcid": "4420", 00:15:05.065 "trtype": "TCP" 00:15:05.065 }, 00:15:05.065 "peer_address": { 00:15:05.065 "adrfam": "IPv4", 00:15:05.065 "traddr": "10.0.0.1", 00:15:05.065 "trsvcid": "34570", 00:15:05.065 "trtype": "TCP" 00:15:05.065 }, 00:15:05.065 "qid": 0, 00:15:05.065 "state": "enabled", 00:15:05.065 "thread": "nvmf_tgt_poll_group_000" 00:15:05.065 } 00:15:05.065 ]' 00:15:05.065 11:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:05.065 11:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:05.065 11:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:05.065 11:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:05.065 11:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:05.065 11:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:05.065 11:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:05.065 11:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:05.323 11:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTU4ZTYwMjZmNWEyYzgzYjY3ZWU1MmE4ZjU0YzRmMjAwM2Q1ZGY4ZmVkZmVkZWY0w3W3ew==: --dhchap-ctrl-secret DHHC-1:03:OTgwNWJkMjE1MTUxNjMxNjRkZTlhZmI3YTJjMjIyN2RhYzM4MjEwYmYxZmI5NWFkZGQ5MTQ2M2U2NWUyYzQ3N6HE8a4=: 00:15:05.323 11:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a --hostid 806d442c-08ba-4719-b76d-33169283459a -l 0 --dhchap-secret DHHC-1:00:YTU4ZTYwMjZmNWEyYzgzYjY3ZWU1MmE4ZjU0YzRmMjAwM2Q1ZGY4ZmVkZmVkZWY0w3W3ew==: 
--dhchap-ctrl-secret DHHC-1:03:OTgwNWJkMjE1MTUxNjMxNjRkZTlhZmI3YTJjMjIyN2RhYzM4MjEwYmYxZmI5NWFkZGQ5MTQ2M2U2NWUyYzQ3N6HE8a4=: 00:15:06.259 11:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:06.259 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:06.259 11:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a 00:15:06.259 11:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.259 11:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:06.259 11:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.259 11:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:06.259 11:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:06.259 11:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:06.825 11:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:15:06.825 11:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:06.825 11:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:06.825 11:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:06.825 11:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:06.825 11:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:06.825 11:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:06.825 11:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.825 11:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:06.825 11:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.825 11:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:06.825 11:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:06.826 11:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:07.084 00:15:07.084 11:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:07.084 11:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:07.084 11:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:07.341 11:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:07.341 11:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:07.341 11:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.341 11:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:07.341 11:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.341 11:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:07.341 { 00:15:07.341 "auth": { 00:15:07.341 "dhgroup": "ffdhe2048", 00:15:07.341 "digest": "sha384", 00:15:07.342 "state": "completed" 00:15:07.342 }, 00:15:07.342 "cntlid": 59, 00:15:07.342 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a", 00:15:07.342 "listen_address": { 00:15:07.342 "adrfam": "IPv4", 00:15:07.342 "traddr": "10.0.0.3", 00:15:07.342 "trsvcid": "4420", 00:15:07.342 "trtype": "TCP" 00:15:07.342 }, 00:15:07.342 "peer_address": { 00:15:07.342 "adrfam": "IPv4", 00:15:07.342 "traddr": "10.0.0.1", 00:15:07.342 "trsvcid": "39802", 00:15:07.342 "trtype": "TCP" 00:15:07.342 }, 00:15:07.342 "qid": 0, 00:15:07.342 "state": "enabled", 00:15:07.342 "thread": "nvmf_tgt_poll_group_000" 00:15:07.342 } 00:15:07.342 ]' 00:15:07.342 11:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:07.342 11:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:07.342 11:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:07.599 11:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:07.599 11:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:07.599 11:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:07.599 11:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:07.599 11:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:07.857 11:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTdlOWYyOTEzYTYxOThiMjE5ZmM2NThmOTY2YmJmZTg53tHC: --dhchap-ctrl-secret DHHC-1:02:ZThlN2MwNmMxNTdkMjhlYTU2YjU2MTY0ZmIwMThlYzFkZjAwNTU3YTdhMTQwNDU0RjLgqQ==: 00:15:07.857 11:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a --hostid 806d442c-08ba-4719-b76d-33169283459a -l 0 --dhchap-secret DHHC-1:01:OTdlOWYyOTEzYTYxOThiMjE5ZmM2NThmOTY2YmJmZTg53tHC: --dhchap-ctrl-secret DHHC-1:02:ZThlN2MwNmMxNTdkMjhlYTU2YjU2MTY0ZmIwMThlYzFkZjAwNTU3YTdhMTQwNDU0RjLgqQ==: 00:15:08.423 11:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:08.423 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:08.681 11:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a 00:15:08.681 11:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.681 11:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:08.681 11:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.681 11:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:08.681 11:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:08.681 11:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:08.939 11:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:15:08.940 11:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:08.940 11:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:08.940 11:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:08.940 11:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:08.940 11:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:08.940 11:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:08.940 11:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.940 11:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:08.940 11:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.940 11:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:08.940 11:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:08.940 11:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:09.198 00:15:09.198 11:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:09.198 11:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:09.198 11:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:09.530 11:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:09.530 11:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:09.530 11:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.530 11:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:09.530 11:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.530 11:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:09.530 { 00:15:09.530 "auth": { 00:15:09.530 "dhgroup": "ffdhe2048", 00:15:09.530 "digest": "sha384", 00:15:09.530 "state": "completed" 00:15:09.530 }, 00:15:09.530 "cntlid": 61, 00:15:09.530 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a", 00:15:09.530 "listen_address": { 00:15:09.530 "adrfam": "IPv4", 00:15:09.530 "traddr": "10.0.0.3", 00:15:09.530 "trsvcid": "4420", 00:15:09.530 "trtype": "TCP" 00:15:09.530 }, 00:15:09.530 "peer_address": { 00:15:09.530 "adrfam": "IPv4", 00:15:09.530 "traddr": "10.0.0.1", 00:15:09.530 "trsvcid": "39816", 00:15:09.530 "trtype": "TCP" 00:15:09.530 }, 00:15:09.530 "qid": 0, 00:15:09.530 "state": "enabled", 00:15:09.530 "thread": "nvmf_tgt_poll_group_000" 00:15:09.530 } 00:15:09.530 ]' 00:15:09.530 11:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:09.803 11:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:09.803 11:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:09.803 11:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:09.803 11:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:09.803 11:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:09.803 11:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:09.803 11:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:10.061 11:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ODhkNTRmMmE1ZmNkZmY0ZDExY2UyMjZlNjZlZDI4MzAwNzA5YmUxMmE5YWMyYWQzAlDRWQ==: --dhchap-ctrl-secret DHHC-1:01:ZmExZTRhMGYzNzhkMWQ2OTBkMjIyMzE3OWFiZjhkMTkzDxnU: 00:15:10.061 11:27:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a --hostid 806d442c-08ba-4719-b76d-33169283459a -l 0 --dhchap-secret DHHC-1:02:ODhkNTRmMmE1ZmNkZmY0ZDExY2UyMjZlNjZlZDI4MzAwNzA5YmUxMmE5YWMyYWQzAlDRWQ==: --dhchap-ctrl-secret DHHC-1:01:ZmExZTRhMGYzNzhkMWQ2OTBkMjIyMzE3OWFiZjhkMTkzDxnU: 00:15:10.998 11:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:10.998 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:10.998 11:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a 00:15:10.998 11:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.998 11:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:10.998 11:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.998 11:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:10.998 11:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:10.998 11:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:11.256 11:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:15:11.256 11:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:11.256 11:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:11.256 11:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:11.256 11:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:11.256 11:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:11.256 11:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a --dhchap-key key3 00:15:11.256 11:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.256 11:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:11.256 11:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.256 11:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:11.256 11:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:11.257 11:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:11.515 00:15:11.515 11:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:11.515 11:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:11.515 11:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:11.774 11:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:11.774 11:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:11.774 11:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.774 11:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:11.774 11:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.774 11:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:11.774 { 00:15:11.774 "auth": { 00:15:11.774 "dhgroup": "ffdhe2048", 00:15:11.774 "digest": "sha384", 00:15:11.774 "state": "completed" 00:15:11.774 }, 00:15:11.774 "cntlid": 63, 00:15:11.774 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a", 00:15:11.774 "listen_address": { 00:15:11.774 "adrfam": "IPv4", 00:15:11.774 "traddr": "10.0.0.3", 00:15:11.774 "trsvcid": "4420", 00:15:11.774 "trtype": "TCP" 00:15:11.774 }, 00:15:11.774 "peer_address": { 00:15:11.774 "adrfam": "IPv4", 00:15:11.774 "traddr": "10.0.0.1", 00:15:11.774 "trsvcid": "39862", 00:15:11.774 "trtype": "TCP" 00:15:11.774 }, 00:15:11.774 "qid": 0, 00:15:11.774 "state": "enabled", 00:15:11.774 "thread": "nvmf_tgt_poll_group_000" 00:15:11.774 } 00:15:11.774 ]' 00:15:11.774 11:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:11.774 11:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:11.774 11:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:12.033 11:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:12.033 11:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:12.033 11:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:12.033 11:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:12.033 11:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:12.291 11:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ODUxYTg2NGMwMGE3ZWM0ODJlYmZkMzVkNDkwOGFmZWViNzcwOTJlM2FlOTM1MmRjMzZjMWRlMTQwYzZjNTgwZnmdTmU=: 00:15:12.291 11:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a --hostid 806d442c-08ba-4719-b76d-33169283459a -l 0 --dhchap-secret DHHC-1:03:ODUxYTg2NGMwMGE3ZWM0ODJlYmZkMzVkNDkwOGFmZWViNzcwOTJlM2FlOTM1MmRjMzZjMWRlMTQwYzZjNTgwZnmdTmU=: 00:15:12.859 11:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:12.859 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:12.859 11:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a 00:15:12.859 11:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.859 11:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:12.859 11:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.859 11:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:12.859 11:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:12.859 11:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:12.859 11:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:13.426 11:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:15:13.426 11:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:13.426 11:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:13.426 11:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:13.426 11:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:13.426 11:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:13.426 11:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:13.426 11:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.426 11:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:13.426 11:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.426 11:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:13.426 11:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:15:13.427 11:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:13.685 00:15:13.685 11:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:13.685 11:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:13.685 11:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:13.943 11:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:13.943 11:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:13.943 11:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.943 11:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:13.943 11:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.943 11:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:13.943 { 00:15:13.943 "auth": { 00:15:13.943 "dhgroup": "ffdhe3072", 00:15:13.943 "digest": "sha384", 00:15:13.943 "state": "completed" 00:15:13.943 }, 00:15:13.943 "cntlid": 65, 00:15:13.943 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a", 00:15:13.944 "listen_address": { 00:15:13.944 "adrfam": "IPv4", 00:15:13.944 "traddr": "10.0.0.3", 00:15:13.944 "trsvcid": "4420", 00:15:13.944 "trtype": "TCP" 00:15:13.944 }, 00:15:13.944 "peer_address": { 00:15:13.944 "adrfam": "IPv4", 00:15:13.944 "traddr": "10.0.0.1", 00:15:13.944 "trsvcid": "39890", 00:15:13.944 "trtype": "TCP" 00:15:13.944 }, 00:15:13.944 "qid": 0, 00:15:13.944 "state": "enabled", 00:15:13.944 "thread": "nvmf_tgt_poll_group_000" 00:15:13.944 } 00:15:13.944 ]' 00:15:13.944 11:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:14.202 11:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:14.202 11:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:14.202 11:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:14.202 11:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:14.202 11:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:14.202 11:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:14.202 11:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:14.485 11:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:00:YTU4ZTYwMjZmNWEyYzgzYjY3ZWU1MmE4ZjU0YzRmMjAwM2Q1ZGY4ZmVkZmVkZWY0w3W3ew==: --dhchap-ctrl-secret DHHC-1:03:OTgwNWJkMjE1MTUxNjMxNjRkZTlhZmI3YTJjMjIyN2RhYzM4MjEwYmYxZmI5NWFkZGQ5MTQ2M2U2NWUyYzQ3N6HE8a4=: 00:15:14.485 11:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a --hostid 806d442c-08ba-4719-b76d-33169283459a -l 0 --dhchap-secret DHHC-1:00:YTU4ZTYwMjZmNWEyYzgzYjY3ZWU1MmE4ZjU0YzRmMjAwM2Q1ZGY4ZmVkZmVkZWY0w3W3ew==: --dhchap-ctrl-secret DHHC-1:03:OTgwNWJkMjE1MTUxNjMxNjRkZTlhZmI3YTJjMjIyN2RhYzM4MjEwYmYxZmI5NWFkZGQ5MTQ2M2U2NWUyYzQ3N6HE8a4=: 00:15:15.437 11:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:15.437 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:15.437 11:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a 00:15:15.437 11:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.437 11:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:15.437 11:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.437 11:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:15.437 11:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:15.437 11:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:15.437 11:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:15:15.437 11:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:15.437 11:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:15.437 11:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:15.437 11:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:15.437 11:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:15.437 11:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:15.437 11:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.437 11:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:15.437 11:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.437 11:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:15.437 11:28:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:15.437 11:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:16.004 00:15:16.004 11:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:16.004 11:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:16.004 11:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:16.263 11:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:16.263 11:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:16.263 11:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.263 11:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:16.263 11:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.263 11:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:16.263 { 00:15:16.263 "auth": { 00:15:16.263 "dhgroup": "ffdhe3072", 00:15:16.263 "digest": "sha384", 00:15:16.263 "state": "completed" 00:15:16.263 }, 00:15:16.263 "cntlid": 67, 00:15:16.263 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a", 00:15:16.263 "listen_address": { 00:15:16.263 "adrfam": "IPv4", 00:15:16.263 "traddr": "10.0.0.3", 00:15:16.263 "trsvcid": "4420", 00:15:16.263 "trtype": "TCP" 00:15:16.263 }, 00:15:16.263 "peer_address": { 00:15:16.263 "adrfam": "IPv4", 00:15:16.263 "traddr": "10.0.0.1", 00:15:16.263 "trsvcid": "53560", 00:15:16.263 "trtype": "TCP" 00:15:16.263 }, 00:15:16.263 "qid": 0, 00:15:16.263 "state": "enabled", 00:15:16.263 "thread": "nvmf_tgt_poll_group_000" 00:15:16.263 } 00:15:16.263 ]' 00:15:16.263 11:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:16.263 11:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:16.263 11:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:16.263 11:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:16.263 11:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:16.263 11:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:16.263 11:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:16.263 11:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:16.830 11:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTdlOWYyOTEzYTYxOThiMjE5ZmM2NThmOTY2YmJmZTg53tHC: --dhchap-ctrl-secret DHHC-1:02:ZThlN2MwNmMxNTdkMjhlYTU2YjU2MTY0ZmIwMThlYzFkZjAwNTU3YTdhMTQwNDU0RjLgqQ==: 00:15:16.830 11:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a --hostid 806d442c-08ba-4719-b76d-33169283459a -l 0 --dhchap-secret DHHC-1:01:OTdlOWYyOTEzYTYxOThiMjE5ZmM2NThmOTY2YmJmZTg53tHC: --dhchap-ctrl-secret DHHC-1:02:ZThlN2MwNmMxNTdkMjhlYTU2YjU2MTY0ZmIwMThlYzFkZjAwNTU3YTdhMTQwNDU0RjLgqQ==: 00:15:17.398 11:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:17.398 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:17.398 11:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a 00:15:17.398 11:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.398 11:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:17.398 11:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.398 11:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:17.398 11:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:17.398 11:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:17.657 11:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:15:17.657 11:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:17.657 11:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:17.657 11:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:17.657 11:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:17.657 11:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:17.657 11:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:17.657 11:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.657 11:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:17.916 11:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.916 11:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:17.916 11:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:17.916 11:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:18.175 00:15:18.175 11:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:18.175 11:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:18.175 11:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:18.433 11:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:18.433 11:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:18.433 11:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.692 11:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:18.692 11:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.692 11:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:18.692 { 00:15:18.692 "auth": { 00:15:18.692 "dhgroup": "ffdhe3072", 00:15:18.692 "digest": "sha384", 00:15:18.692 "state": "completed" 00:15:18.692 }, 00:15:18.692 "cntlid": 69, 00:15:18.692 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a", 00:15:18.692 "listen_address": { 00:15:18.692 "adrfam": "IPv4", 00:15:18.692 "traddr": "10.0.0.3", 00:15:18.692 "trsvcid": "4420", 00:15:18.692 "trtype": "TCP" 00:15:18.692 }, 00:15:18.692 "peer_address": { 00:15:18.692 "adrfam": "IPv4", 00:15:18.692 "traddr": "10.0.0.1", 00:15:18.692 "trsvcid": "53578", 00:15:18.692 "trtype": "TCP" 00:15:18.692 }, 00:15:18.692 "qid": 0, 00:15:18.692 "state": "enabled", 00:15:18.692 "thread": "nvmf_tgt_poll_group_000" 00:15:18.692 } 00:15:18.692 ]' 00:15:18.692 11:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:18.692 11:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:18.692 11:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:18.692 11:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:18.692 11:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:18.692 11:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:18.692 11:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:15:18.692 11:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:19.258 11:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ODhkNTRmMmE1ZmNkZmY0ZDExY2UyMjZlNjZlZDI4MzAwNzA5YmUxMmE5YWMyYWQzAlDRWQ==: --dhchap-ctrl-secret DHHC-1:01:ZmExZTRhMGYzNzhkMWQ2OTBkMjIyMzE3OWFiZjhkMTkzDxnU: 00:15:19.258 11:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a --hostid 806d442c-08ba-4719-b76d-33169283459a -l 0 --dhchap-secret DHHC-1:02:ODhkNTRmMmE1ZmNkZmY0ZDExY2UyMjZlNjZlZDI4MzAwNzA5YmUxMmE5YWMyYWQzAlDRWQ==: --dhchap-ctrl-secret DHHC-1:01:ZmExZTRhMGYzNzhkMWQ2OTBkMjIyMzE3OWFiZjhkMTkzDxnU: 00:15:19.826 11:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:19.826 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:19.826 11:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a 00:15:19.826 11:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.826 11:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:19.826 11:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.826 11:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:19.826 11:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:19.826 11:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:20.085 11:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:15:20.085 11:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:20.085 11:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:20.085 11:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:20.085 11:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:20.085 11:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:20.085 11:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a --dhchap-key key3 00:15:20.085 11:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.085 11:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:20.085 11:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.085 11:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:20.085 11:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:20.085 11:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:20.652 00:15:20.652 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:20.652 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:20.652 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:20.911 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:20.911 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:20.911 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.911 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:20.911 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.911 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:20.911 { 00:15:20.911 "auth": { 00:15:20.911 "dhgroup": "ffdhe3072", 00:15:20.911 "digest": "sha384", 00:15:20.911 "state": "completed" 00:15:20.911 }, 00:15:20.911 "cntlid": 71, 00:15:20.911 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a", 00:15:20.911 "listen_address": { 00:15:20.911 "adrfam": "IPv4", 00:15:20.911 "traddr": "10.0.0.3", 00:15:20.911 "trsvcid": "4420", 00:15:20.911 "trtype": "TCP" 00:15:20.911 }, 00:15:20.911 "peer_address": { 00:15:20.911 "adrfam": "IPv4", 00:15:20.911 "traddr": "10.0.0.1", 00:15:20.911 "trsvcid": "53592", 00:15:20.911 "trtype": "TCP" 00:15:20.911 }, 00:15:20.911 "qid": 0, 00:15:20.911 "state": "enabled", 00:15:20.911 "thread": "nvmf_tgt_poll_group_000" 00:15:20.911 } 00:15:20.911 ]' 00:15:20.911 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:20.911 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:20.911 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:20.911 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:20.911 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:21.169 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:21.169 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:21.169 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:21.427 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ODUxYTg2NGMwMGE3ZWM0ODJlYmZkMzVkNDkwOGFmZWViNzcwOTJlM2FlOTM1MmRjMzZjMWRlMTQwYzZjNTgwZnmdTmU=: 00:15:21.427 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a --hostid 806d442c-08ba-4719-b76d-33169283459a -l 0 --dhchap-secret DHHC-1:03:ODUxYTg2NGMwMGE3ZWM0ODJlYmZkMzVkNDkwOGFmZWViNzcwOTJlM2FlOTM1MmRjMzZjMWRlMTQwYzZjNTgwZnmdTmU=: 00:15:22.361 11:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:22.361 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:22.361 11:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a 00:15:22.361 11:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.361 11:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:22.361 11:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.361 11:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:22.361 11:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:22.361 11:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:22.361 11:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:22.619 11:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:15:22.619 11:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:22.619 11:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:22.619 11:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:22.619 11:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:22.619 11:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:22.619 11:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:22.619 11:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.619 11:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:22.619 11:28:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.619 11:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:22.619 11:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:22.619 11:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:23.186 00:15:23.186 11:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:23.186 11:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:23.186 11:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:23.444 11:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:23.444 11:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:23.444 11:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.444 11:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:23.444 11:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.444 11:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:23.444 { 00:15:23.444 "auth": { 00:15:23.444 "dhgroup": "ffdhe4096", 00:15:23.444 "digest": "sha384", 00:15:23.444 "state": "completed" 00:15:23.444 }, 00:15:23.444 "cntlid": 73, 00:15:23.444 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a", 00:15:23.444 "listen_address": { 00:15:23.444 "adrfam": "IPv4", 00:15:23.444 "traddr": "10.0.0.3", 00:15:23.444 "trsvcid": "4420", 00:15:23.444 "trtype": "TCP" 00:15:23.444 }, 00:15:23.444 "peer_address": { 00:15:23.444 "adrfam": "IPv4", 00:15:23.444 "traddr": "10.0.0.1", 00:15:23.444 "trsvcid": "53614", 00:15:23.444 "trtype": "TCP" 00:15:23.444 }, 00:15:23.444 "qid": 0, 00:15:23.444 "state": "enabled", 00:15:23.444 "thread": "nvmf_tgt_poll_group_000" 00:15:23.445 } 00:15:23.445 ]' 00:15:23.445 11:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:23.445 11:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:23.445 11:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:23.445 11:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:23.445 11:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:23.445 11:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- 
# [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:23.445 11:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:23.445 11:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:23.703 11:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTU4ZTYwMjZmNWEyYzgzYjY3ZWU1MmE4ZjU0YzRmMjAwM2Q1ZGY4ZmVkZmVkZWY0w3W3ew==: --dhchap-ctrl-secret DHHC-1:03:OTgwNWJkMjE1MTUxNjMxNjRkZTlhZmI3YTJjMjIyN2RhYzM4MjEwYmYxZmI5NWFkZGQ5MTQ2M2U2NWUyYzQ3N6HE8a4=: 00:15:23.703 11:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a --hostid 806d442c-08ba-4719-b76d-33169283459a -l 0 --dhchap-secret DHHC-1:00:YTU4ZTYwMjZmNWEyYzgzYjY3ZWU1MmE4ZjU0YzRmMjAwM2Q1ZGY4ZmVkZmVkZWY0w3W3ew==: --dhchap-ctrl-secret DHHC-1:03:OTgwNWJkMjE1MTUxNjMxNjRkZTlhZmI3YTJjMjIyN2RhYzM4MjEwYmYxZmI5NWFkZGQ5MTQ2M2U2NWUyYzQ3N6HE8a4=: 00:15:24.638 11:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:24.638 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:24.638 11:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a 00:15:24.638 11:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.638 11:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:24.638 11:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.638 11:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:24.638 11:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:24.638 11:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:24.898 11:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:15:24.898 11:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:24.898 11:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:24.898 11:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:24.898 11:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:24.898 11:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:24.898 11:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:24.898 11:28:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.898 11:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:24.898 11:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.898 11:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:24.898 11:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:24.898 11:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:25.466 00:15:25.466 11:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:25.466 11:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:25.466 11:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:25.724 11:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:25.724 11:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:25.724 11:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.724 11:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:25.724 11:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.724 11:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:25.724 { 00:15:25.724 "auth": { 00:15:25.724 "dhgroup": "ffdhe4096", 00:15:25.724 "digest": "sha384", 00:15:25.724 "state": "completed" 00:15:25.724 }, 00:15:25.724 "cntlid": 75, 00:15:25.724 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a", 00:15:25.724 "listen_address": { 00:15:25.724 "adrfam": "IPv4", 00:15:25.724 "traddr": "10.0.0.3", 00:15:25.724 "trsvcid": "4420", 00:15:25.724 "trtype": "TCP" 00:15:25.724 }, 00:15:25.724 "peer_address": { 00:15:25.724 "adrfam": "IPv4", 00:15:25.724 "traddr": "10.0.0.1", 00:15:25.724 "trsvcid": "56456", 00:15:25.724 "trtype": "TCP" 00:15:25.724 }, 00:15:25.724 "qid": 0, 00:15:25.724 "state": "enabled", 00:15:25.724 "thread": "nvmf_tgt_poll_group_000" 00:15:25.724 } 00:15:25.724 ]' 00:15:25.724 11:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:25.724 11:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:25.724 11:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:25.724 11:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 
== \f\f\d\h\e\4\0\9\6 ]] 00:15:25.724 11:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:25.724 11:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:25.724 11:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:25.724 11:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:25.983 11:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTdlOWYyOTEzYTYxOThiMjE5ZmM2NThmOTY2YmJmZTg53tHC: --dhchap-ctrl-secret DHHC-1:02:ZThlN2MwNmMxNTdkMjhlYTU2YjU2MTY0ZmIwMThlYzFkZjAwNTU3YTdhMTQwNDU0RjLgqQ==: 00:15:25.983 11:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a --hostid 806d442c-08ba-4719-b76d-33169283459a -l 0 --dhchap-secret DHHC-1:01:OTdlOWYyOTEzYTYxOThiMjE5ZmM2NThmOTY2YmJmZTg53tHC: --dhchap-ctrl-secret DHHC-1:02:ZThlN2MwNmMxNTdkMjhlYTU2YjU2MTY0ZmIwMThlYzFkZjAwNTU3YTdhMTQwNDU0RjLgqQ==: 00:15:26.918 11:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:26.918 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:26.918 11:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a 00:15:26.918 11:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.918 11:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:26.918 11:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.918 11:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:26.918 11:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:26.918 11:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:27.177 11:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:15:27.177 11:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:27.177 11:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:27.177 11:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:27.177 11:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:27.177 11:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:27.177 11:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:27.177 11:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.177 11:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:27.177 11:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.177 11:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:27.177 11:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:27.177 11:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:27.743 00:15:27.743 11:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:27.743 11:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:27.743 11:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:28.001 11:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:28.001 11:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:28.001 11:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.001 11:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:28.001 11:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.001 11:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:28.001 { 00:15:28.001 "auth": { 00:15:28.001 "dhgroup": "ffdhe4096", 00:15:28.001 "digest": "sha384", 00:15:28.001 "state": "completed" 00:15:28.001 }, 00:15:28.001 "cntlid": 77, 00:15:28.001 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a", 00:15:28.001 "listen_address": { 00:15:28.001 "adrfam": "IPv4", 00:15:28.001 "traddr": "10.0.0.3", 00:15:28.001 "trsvcid": "4420", 00:15:28.001 "trtype": "TCP" 00:15:28.001 }, 00:15:28.001 "peer_address": { 00:15:28.001 "adrfam": "IPv4", 00:15:28.001 "traddr": "10.0.0.1", 00:15:28.001 "trsvcid": "56484", 00:15:28.001 "trtype": "TCP" 00:15:28.001 }, 00:15:28.001 "qid": 0, 00:15:28.001 "state": "enabled", 00:15:28.001 "thread": "nvmf_tgt_poll_group_000" 00:15:28.001 } 00:15:28.001 ]' 00:15:28.001 11:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:28.001 11:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:28.001 11:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- 
# jq -r '.[0].auth.dhgroup' 00:15:28.001 11:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:28.001 11:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:28.259 11:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:28.259 11:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:28.259 11:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:28.517 11:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ODhkNTRmMmE1ZmNkZmY0ZDExY2UyMjZlNjZlZDI4MzAwNzA5YmUxMmE5YWMyYWQzAlDRWQ==: --dhchap-ctrl-secret DHHC-1:01:ZmExZTRhMGYzNzhkMWQ2OTBkMjIyMzE3OWFiZjhkMTkzDxnU: 00:15:28.517 11:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a --hostid 806d442c-08ba-4719-b76d-33169283459a -l 0 --dhchap-secret DHHC-1:02:ODhkNTRmMmE1ZmNkZmY0ZDExY2UyMjZlNjZlZDI4MzAwNzA5YmUxMmE5YWMyYWQzAlDRWQ==: --dhchap-ctrl-secret DHHC-1:01:ZmExZTRhMGYzNzhkMWQ2OTBkMjIyMzE3OWFiZjhkMTkzDxnU: 00:15:29.452 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:29.452 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:29.452 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a 00:15:29.452 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.452 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:29.452 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.452 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:29.452 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:29.452 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:29.711 11:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:15:29.711 11:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:29.711 11:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:29.711 11:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:29.711 11:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:29.711 11:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:29.711 11:28:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a --dhchap-key key3 00:15:29.711 11:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.711 11:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:29.711 11:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.711 11:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:29.711 11:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:29.711 11:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:29.969 00:15:29.969 11:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:29.969 11:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:29.969 11:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:30.534 11:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:30.534 11:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:30.534 11:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.535 11:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:30.535 11:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.535 11:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:30.535 { 00:15:30.535 "auth": { 00:15:30.535 "dhgroup": "ffdhe4096", 00:15:30.535 "digest": "sha384", 00:15:30.535 "state": "completed" 00:15:30.535 }, 00:15:30.535 "cntlid": 79, 00:15:30.535 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a", 00:15:30.535 "listen_address": { 00:15:30.535 "adrfam": "IPv4", 00:15:30.535 "traddr": "10.0.0.3", 00:15:30.535 "trsvcid": "4420", 00:15:30.535 "trtype": "TCP" 00:15:30.535 }, 00:15:30.535 "peer_address": { 00:15:30.535 "adrfam": "IPv4", 00:15:30.535 "traddr": "10.0.0.1", 00:15:30.535 "trsvcid": "56514", 00:15:30.535 "trtype": "TCP" 00:15:30.535 }, 00:15:30.535 "qid": 0, 00:15:30.535 "state": "enabled", 00:15:30.535 "thread": "nvmf_tgt_poll_group_000" 00:15:30.535 } 00:15:30.535 ]' 00:15:30.535 11:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:30.535 11:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:30.535 11:28:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:30.535 11:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:30.535 11:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:30.535 11:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:30.535 11:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:30.535 11:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:30.792 11:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ODUxYTg2NGMwMGE3ZWM0ODJlYmZkMzVkNDkwOGFmZWViNzcwOTJlM2FlOTM1MmRjMzZjMWRlMTQwYzZjNTgwZnmdTmU=: 00:15:30.792 11:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a --hostid 806d442c-08ba-4719-b76d-33169283459a -l 0 --dhchap-secret DHHC-1:03:ODUxYTg2NGMwMGE3ZWM0ODJlYmZkMzVkNDkwOGFmZWViNzcwOTJlM2FlOTM1MmRjMzZjMWRlMTQwYzZjNTgwZnmdTmU=: 00:15:31.726 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:31.726 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:31.726 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a 00:15:31.726 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.726 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:31.726 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.726 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:31.726 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:31.726 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:31.726 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:31.984 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:15:31.984 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:31.984 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:31.984 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:31.984 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:31.984 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:31.984 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:31.984 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.984 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:31.984 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.984 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:31.984 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:31.984 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:32.551 00:15:32.551 11:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:32.551 11:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:32.551 11:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:32.809 11:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:32.809 11:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:32.809 11:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.809 11:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:32.809 11:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.809 11:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:32.809 { 00:15:32.809 "auth": { 00:15:32.809 "dhgroup": "ffdhe6144", 00:15:32.809 "digest": "sha384", 00:15:32.809 "state": "completed" 00:15:32.809 }, 00:15:32.809 "cntlid": 81, 00:15:32.809 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a", 00:15:32.809 "listen_address": { 00:15:32.809 "adrfam": "IPv4", 00:15:32.809 "traddr": "10.0.0.3", 00:15:32.809 "trsvcid": "4420", 00:15:32.809 "trtype": "TCP" 00:15:32.809 }, 00:15:32.809 "peer_address": { 00:15:32.809 "adrfam": "IPv4", 00:15:32.809 "traddr": "10.0.0.1", 00:15:32.809 "trsvcid": "56534", 00:15:32.809 "trtype": "TCP" 00:15:32.809 }, 00:15:32.809 "qid": 0, 00:15:32.809 "state": "enabled", 00:15:32.809 "thread": "nvmf_tgt_poll_group_000" 00:15:32.809 } 00:15:32.809 ]' 00:15:32.809 11:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 
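Note: the records above and below repeat one DH-HMAC-CHAP verification cycle per key index and DH group. Each pass restricts the host to a single digest/dhgroup via bdev_nvme_set_options, re-registers the host NQN on the target with the matching --dhchap-key/--dhchap-ctrlr-key pair via nvmf_subsystem_add_host, attaches a controller over the host RPC socket, and then asserts through nvmf_subsystem_get_qpairs piped into jq that the established qpair negotiated the expected digest, dhgroup, and a "completed" auth state before detaching. The lines below are a minimal illustrative sketch of that cycle, not the original auth.sh: it assumes a target listening on 10.0.0.3:4420 with subsystem nqn.2024-03.io.spdk:cnode0, a host bdev_nvme instance reachable at /var/tmp/host.sock, keys key0/ckey0 already loaded on the target, and rpc.py talking to the target on its default socket (the test's own rpc_cmd/hostrpc wrappers are not shown in this log).

  # illustrative sketch (assumed names; one digest/dhgroup/key combination)
  digest=sha384; dhgroup=ffdhe4096; keyid=0
  subnqn=nqn.2024-03.io.spdk:cnode0
  hostnqn=nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  # host side: only allow the digest and DH group under test
  "$rpc" -s /var/tmp/host.sock bdev_nvme_set_options \
      --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

  # target side: register the host NQN with the key pair under test
  "$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
      --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"

  # host side: attach a controller, authenticating with the same keys
  "$rpc" -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
      -a 10.0.0.3 -s 4420 -q "$hostnqn" -n "$subnqn" \
      --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"

  # assert the negotiated auth parameters on the established qpair
  "$rpc" nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth.digest'   # expect sha384
  "$rpc" nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth.dhgroup'  # expect ffdhe4096
  "$rpc" nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth.state'    # expect completed

  # clean up before the next combination
  "$rpc" -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0

The kernel-initiator passes interleaved in this log exercise the same handshake from nvme-cli instead of the SPDK host: nvme connect with --dhchap-secret/--dhchap-ctrl-secret (the DHHC-1 blobs shown above), followed by nvme disconnect and an nvmf_subsystem_remove_host cleanup on the target before the next key/dhgroup combination is configured.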
00:15:32.809 11:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:32.809 11:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:33.066 11:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:33.066 11:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:33.066 11:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:33.066 11:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:33.066 11:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:33.323 11:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTU4ZTYwMjZmNWEyYzgzYjY3ZWU1MmE4ZjU0YzRmMjAwM2Q1ZGY4ZmVkZmVkZWY0w3W3ew==: --dhchap-ctrl-secret DHHC-1:03:OTgwNWJkMjE1MTUxNjMxNjRkZTlhZmI3YTJjMjIyN2RhYzM4MjEwYmYxZmI5NWFkZGQ5MTQ2M2U2NWUyYzQ3N6HE8a4=: 00:15:33.324 11:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a --hostid 806d442c-08ba-4719-b76d-33169283459a -l 0 --dhchap-secret DHHC-1:00:YTU4ZTYwMjZmNWEyYzgzYjY3ZWU1MmE4ZjU0YzRmMjAwM2Q1ZGY4ZmVkZmVkZWY0w3W3ew==: --dhchap-ctrl-secret DHHC-1:03:OTgwNWJkMjE1MTUxNjMxNjRkZTlhZmI3YTJjMjIyN2RhYzM4MjEwYmYxZmI5NWFkZGQ5MTQ2M2U2NWUyYzQ3N6HE8a4=: 00:15:34.257 11:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:34.257 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:34.257 11:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a 00:15:34.257 11:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.258 11:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:34.258 11:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.258 11:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:34.258 11:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:34.258 11:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:34.258 11:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:15:34.258 11:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:34.258 11:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:34.258 11:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # 
dhgroup=ffdhe6144 00:15:34.258 11:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:34.258 11:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:34.258 11:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:34.258 11:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.258 11:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:34.258 11:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.258 11:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:34.258 11:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:34.258 11:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:34.850 00:15:34.850 11:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:34.850 11:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:34.850 11:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:35.134 11:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:35.134 11:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:35.134 11:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.134 11:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:35.134 11:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.134 11:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:35.134 { 00:15:35.134 "auth": { 00:15:35.134 "dhgroup": "ffdhe6144", 00:15:35.134 "digest": "sha384", 00:15:35.134 "state": "completed" 00:15:35.134 }, 00:15:35.134 "cntlid": 83, 00:15:35.134 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a", 00:15:35.134 "listen_address": { 00:15:35.134 "adrfam": "IPv4", 00:15:35.134 "traddr": "10.0.0.3", 00:15:35.134 "trsvcid": "4420", 00:15:35.134 "trtype": "TCP" 00:15:35.134 }, 00:15:35.134 "peer_address": { 00:15:35.134 "adrfam": "IPv4", 00:15:35.134 "traddr": "10.0.0.1", 00:15:35.134 "trsvcid": "56550", 00:15:35.134 "trtype": "TCP" 00:15:35.134 }, 00:15:35.134 "qid": 0, 00:15:35.134 "state": 
"enabled", 00:15:35.134 "thread": "nvmf_tgt_poll_group_000" 00:15:35.134 } 00:15:35.134 ]' 00:15:35.134 11:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:35.134 11:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:35.134 11:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:35.393 11:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:35.393 11:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:35.393 11:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:35.393 11:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:35.393 11:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:35.652 11:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTdlOWYyOTEzYTYxOThiMjE5ZmM2NThmOTY2YmJmZTg53tHC: --dhchap-ctrl-secret DHHC-1:02:ZThlN2MwNmMxNTdkMjhlYTU2YjU2MTY0ZmIwMThlYzFkZjAwNTU3YTdhMTQwNDU0RjLgqQ==: 00:15:35.652 11:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a --hostid 806d442c-08ba-4719-b76d-33169283459a -l 0 --dhchap-secret DHHC-1:01:OTdlOWYyOTEzYTYxOThiMjE5ZmM2NThmOTY2YmJmZTg53tHC: --dhchap-ctrl-secret DHHC-1:02:ZThlN2MwNmMxNTdkMjhlYTU2YjU2MTY0ZmIwMThlYzFkZjAwNTU3YTdhMTQwNDU0RjLgqQ==: 00:15:36.219 11:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:36.219 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:36.219 11:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a 00:15:36.219 11:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.219 11:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:36.219 11:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.219 11:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:36.219 11:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:36.219 11:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:36.786 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:15:36.786 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:36.786 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 
-- # digest=sha384 00:15:36.786 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:36.786 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:36.786 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:36.786 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:36.786 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.786 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:36.786 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.786 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:36.786 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:36.787 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:37.046 00:15:37.046 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:37.046 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:37.046 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:37.612 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:37.613 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:37.613 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.613 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:37.613 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.613 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:37.613 { 00:15:37.613 "auth": { 00:15:37.613 "dhgroup": "ffdhe6144", 00:15:37.613 "digest": "sha384", 00:15:37.613 "state": "completed" 00:15:37.613 }, 00:15:37.613 "cntlid": 85, 00:15:37.613 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a", 00:15:37.613 "listen_address": { 00:15:37.613 "adrfam": "IPv4", 00:15:37.613 "traddr": "10.0.0.3", 00:15:37.613 "trsvcid": "4420", 00:15:37.613 "trtype": "TCP" 00:15:37.613 }, 00:15:37.613 "peer_address": { 00:15:37.613 "adrfam": "IPv4", 00:15:37.613 "traddr": "10.0.0.1", 00:15:37.613 
"trsvcid": "53728", 00:15:37.613 "trtype": "TCP" 00:15:37.613 }, 00:15:37.613 "qid": 0, 00:15:37.613 "state": "enabled", 00:15:37.613 "thread": "nvmf_tgt_poll_group_000" 00:15:37.613 } 00:15:37.613 ]' 00:15:37.613 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:37.613 11:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:37.613 11:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:37.613 11:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:37.613 11:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:37.613 11:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:37.613 11:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:37.613 11:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:37.871 11:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ODhkNTRmMmE1ZmNkZmY0ZDExY2UyMjZlNjZlZDI4MzAwNzA5YmUxMmE5YWMyYWQzAlDRWQ==: --dhchap-ctrl-secret DHHC-1:01:ZmExZTRhMGYzNzhkMWQ2OTBkMjIyMzE3OWFiZjhkMTkzDxnU: 00:15:37.871 11:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a --hostid 806d442c-08ba-4719-b76d-33169283459a -l 0 --dhchap-secret DHHC-1:02:ODhkNTRmMmE1ZmNkZmY0ZDExY2UyMjZlNjZlZDI4MzAwNzA5YmUxMmE5YWMyYWQzAlDRWQ==: --dhchap-ctrl-secret DHHC-1:01:ZmExZTRhMGYzNzhkMWQ2OTBkMjIyMzE3OWFiZjhkMTkzDxnU: 00:15:38.806 11:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:38.806 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:38.806 11:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a 00:15:38.806 11:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.806 11:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:38.806 11:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.806 11:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:38.806 11:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:38.806 11:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:39.065 11:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:15:39.065 11:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest 
dhgroup key ckey qpairs 00:15:39.065 11:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:39.065 11:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:39.065 11:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:39.065 11:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:39.065 11:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a --dhchap-key key3 00:15:39.065 11:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.065 11:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:39.065 11:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.065 11:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:39.065 11:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:39.065 11:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:39.631 00:15:39.631 11:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:39.631 11:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:39.631 11:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:39.890 11:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:39.890 11:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:39.890 11:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.890 11:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:39.890 11:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.890 11:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:39.890 { 00:15:39.890 "auth": { 00:15:39.890 "dhgroup": "ffdhe6144", 00:15:39.890 "digest": "sha384", 00:15:39.890 "state": "completed" 00:15:39.890 }, 00:15:39.890 "cntlid": 87, 00:15:39.890 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a", 00:15:39.890 "listen_address": { 00:15:39.890 "adrfam": "IPv4", 00:15:39.890 "traddr": "10.0.0.3", 00:15:39.890 "trsvcid": "4420", 00:15:39.890 "trtype": "TCP" 00:15:39.890 }, 00:15:39.890 "peer_address": { 00:15:39.890 "adrfam": "IPv4", 00:15:39.890 "traddr": "10.0.0.1", 
00:15:39.890 "trsvcid": "53752", 00:15:39.890 "trtype": "TCP" 00:15:39.890 }, 00:15:39.890 "qid": 0, 00:15:39.890 "state": "enabled", 00:15:39.890 "thread": "nvmf_tgt_poll_group_000" 00:15:39.890 } 00:15:39.890 ]' 00:15:39.890 11:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:39.890 11:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:39.890 11:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:39.890 11:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:39.890 11:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:40.149 11:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:40.149 11:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:40.149 11:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:40.408 11:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ODUxYTg2NGMwMGE3ZWM0ODJlYmZkMzVkNDkwOGFmZWViNzcwOTJlM2FlOTM1MmRjMzZjMWRlMTQwYzZjNTgwZnmdTmU=: 00:15:40.408 11:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a --hostid 806d442c-08ba-4719-b76d-33169283459a -l 0 --dhchap-secret DHHC-1:03:ODUxYTg2NGMwMGE3ZWM0ODJlYmZkMzVkNDkwOGFmZWViNzcwOTJlM2FlOTM1MmRjMzZjMWRlMTQwYzZjNTgwZnmdTmU=: 00:15:41.342 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:41.342 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:41.342 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a 00:15:41.342 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.342 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:41.342 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.342 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:41.342 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:41.342 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:41.342 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:41.342 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:15:41.342 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- 
# local digest dhgroup key ckey qpairs 00:15:41.342 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:41.342 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:41.342 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:41.342 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:41.342 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:41.342 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.342 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:41.342 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.342 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:41.342 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:41.342 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:42.276 00:15:42.276 11:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:42.276 11:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:42.276 11:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:42.534 11:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:42.534 11:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:42.534 11:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.534 11:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:42.534 11:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.534 11:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:42.534 { 00:15:42.534 "auth": { 00:15:42.534 "dhgroup": "ffdhe8192", 00:15:42.534 "digest": "sha384", 00:15:42.534 "state": "completed" 00:15:42.534 }, 00:15:42.535 "cntlid": 89, 00:15:42.535 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a", 00:15:42.535 "listen_address": { 00:15:42.535 "adrfam": "IPv4", 00:15:42.535 "traddr": "10.0.0.3", 00:15:42.535 "trsvcid": "4420", 00:15:42.535 "trtype": "TCP" 
00:15:42.535 }, 00:15:42.535 "peer_address": { 00:15:42.535 "adrfam": "IPv4", 00:15:42.535 "traddr": "10.0.0.1", 00:15:42.535 "trsvcid": "53782", 00:15:42.535 "trtype": "TCP" 00:15:42.535 }, 00:15:42.535 "qid": 0, 00:15:42.535 "state": "enabled", 00:15:42.535 "thread": "nvmf_tgt_poll_group_000" 00:15:42.535 } 00:15:42.535 ]' 00:15:42.535 11:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:42.535 11:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:42.535 11:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:42.793 11:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:42.793 11:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:42.793 11:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:42.793 11:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:42.793 11:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:43.051 11:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTU4ZTYwMjZmNWEyYzgzYjY3ZWU1MmE4ZjU0YzRmMjAwM2Q1ZGY4ZmVkZmVkZWY0w3W3ew==: --dhchap-ctrl-secret DHHC-1:03:OTgwNWJkMjE1MTUxNjMxNjRkZTlhZmI3YTJjMjIyN2RhYzM4MjEwYmYxZmI5NWFkZGQ5MTQ2M2U2NWUyYzQ3N6HE8a4=: 00:15:43.051 11:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a --hostid 806d442c-08ba-4719-b76d-33169283459a -l 0 --dhchap-secret DHHC-1:00:YTU4ZTYwMjZmNWEyYzgzYjY3ZWU1MmE4ZjU0YzRmMjAwM2Q1ZGY4ZmVkZmVkZWY0w3W3ew==: --dhchap-ctrl-secret DHHC-1:03:OTgwNWJkMjE1MTUxNjMxNjRkZTlhZmI3YTJjMjIyN2RhYzM4MjEwYmYxZmI5NWFkZGQ5MTQ2M2U2NWUyYzQ3N6HE8a4=: 00:15:43.985 11:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:43.985 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:43.985 11:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a 00:15:43.985 11:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.985 11:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:43.985 11:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.985 11:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:43.985 11:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:43.985 11:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:43.985 11:28:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:15:43.985 11:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:43.985 11:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:43.985 11:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:43.985 11:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:43.985 11:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:43.985 11:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:43.985 11:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.985 11:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:44.242 11:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.242 11:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:44.242 11:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:44.242 11:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:44.808 00:15:44.808 11:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:44.808 11:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:44.808 11:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:45.374 11:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:45.374 11:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:45.374 11:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.374 11:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:45.374 11:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.374 11:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:45.374 { 00:15:45.374 "auth": { 00:15:45.374 "dhgroup": "ffdhe8192", 00:15:45.374 "digest": "sha384", 00:15:45.374 "state": "completed" 00:15:45.374 }, 00:15:45.374 "cntlid": 91, 00:15:45.374 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a", 00:15:45.374 "listen_address": { 00:15:45.374 "adrfam": "IPv4", 00:15:45.374 "traddr": "10.0.0.3", 00:15:45.374 "trsvcid": "4420", 00:15:45.374 "trtype": "TCP" 00:15:45.374 }, 00:15:45.374 "peer_address": { 00:15:45.374 "adrfam": "IPv4", 00:15:45.374 "traddr": "10.0.0.1", 00:15:45.374 "trsvcid": "53808", 00:15:45.374 "trtype": "TCP" 00:15:45.374 }, 00:15:45.374 "qid": 0, 00:15:45.374 "state": "enabled", 00:15:45.374 "thread": "nvmf_tgt_poll_group_000" 00:15:45.374 } 00:15:45.374 ]' 00:15:45.374 11:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:45.374 11:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:45.374 11:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:45.374 11:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:45.374 11:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:45.374 11:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:45.374 11:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:45.374 11:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:45.940 11:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTdlOWYyOTEzYTYxOThiMjE5ZmM2NThmOTY2YmJmZTg53tHC: --dhchap-ctrl-secret DHHC-1:02:ZThlN2MwNmMxNTdkMjhlYTU2YjU2MTY0ZmIwMThlYzFkZjAwNTU3YTdhMTQwNDU0RjLgqQ==: 00:15:45.940 11:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a --hostid 806d442c-08ba-4719-b76d-33169283459a -l 0 --dhchap-secret DHHC-1:01:OTdlOWYyOTEzYTYxOThiMjE5ZmM2NThmOTY2YmJmZTg53tHC: --dhchap-ctrl-secret DHHC-1:02:ZThlN2MwNmMxNTdkMjhlYTU2YjU2MTY0ZmIwMThlYzFkZjAwNTU3YTdhMTQwNDU0RjLgqQ==: 00:15:46.558 11:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:46.558 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:46.558 11:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a 00:15:46.558 11:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.558 11:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:46.558 11:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.558 11:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:46.558 11:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:46.558 11:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:46.816 11:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:15:46.816 11:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:46.816 11:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:46.816 11:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:46.816 11:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:46.816 11:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:46.817 11:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:46.817 11:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.817 11:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:46.817 11:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.817 11:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:46.817 11:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:46.817 11:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:47.750 00:15:47.750 11:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:47.750 11:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:47.750 11:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:48.008 11:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:48.008 11:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:48.008 11:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.008 11:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:48.008 11:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.008 11:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:48.008 { 00:15:48.008 "auth": { 00:15:48.008 "dhgroup": "ffdhe8192", 
00:15:48.008 "digest": "sha384", 00:15:48.008 "state": "completed" 00:15:48.008 }, 00:15:48.008 "cntlid": 93, 00:15:48.008 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a", 00:15:48.008 "listen_address": { 00:15:48.008 "adrfam": "IPv4", 00:15:48.008 "traddr": "10.0.0.3", 00:15:48.008 "trsvcid": "4420", 00:15:48.008 "trtype": "TCP" 00:15:48.008 }, 00:15:48.008 "peer_address": { 00:15:48.008 "adrfam": "IPv4", 00:15:48.008 "traddr": "10.0.0.1", 00:15:48.008 "trsvcid": "37552", 00:15:48.008 "trtype": "TCP" 00:15:48.008 }, 00:15:48.008 "qid": 0, 00:15:48.008 "state": "enabled", 00:15:48.008 "thread": "nvmf_tgt_poll_group_000" 00:15:48.008 } 00:15:48.008 ]' 00:15:48.008 11:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:48.267 11:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:48.267 11:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:48.267 11:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:48.267 11:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:48.267 11:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:48.267 11:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:48.267 11:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:48.834 11:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ODhkNTRmMmE1ZmNkZmY0ZDExY2UyMjZlNjZlZDI4MzAwNzA5YmUxMmE5YWMyYWQzAlDRWQ==: --dhchap-ctrl-secret DHHC-1:01:ZmExZTRhMGYzNzhkMWQ2OTBkMjIyMzE3OWFiZjhkMTkzDxnU: 00:15:48.834 11:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a --hostid 806d442c-08ba-4719-b76d-33169283459a -l 0 --dhchap-secret DHHC-1:02:ODhkNTRmMmE1ZmNkZmY0ZDExY2UyMjZlNjZlZDI4MzAwNzA5YmUxMmE5YWMyYWQzAlDRWQ==: --dhchap-ctrl-secret DHHC-1:01:ZmExZTRhMGYzNzhkMWQ2OTBkMjIyMzE3OWFiZjhkMTkzDxnU: 00:15:49.406 11:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:49.406 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:49.406 11:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a 00:15:49.406 11:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.406 11:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:49.406 11:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.406 11:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:49.406 11:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe8192 00:15:49.406 11:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:50.022 11:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:15:50.022 11:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:50.022 11:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:50.022 11:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:50.022 11:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:50.022 11:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:50.022 11:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a --dhchap-key key3 00:15:50.022 11:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.022 11:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:50.022 11:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.022 11:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:50.022 11:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:50.022 11:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:50.957 00:15:50.957 11:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:50.957 11:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:50.957 11:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:51.215 11:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:51.215 11:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:51.215 11:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.215 11:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:51.215 11:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.215 11:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:51.215 { 00:15:51.215 "auth": { 00:15:51.215 "dhgroup": 
"ffdhe8192", 00:15:51.215 "digest": "sha384", 00:15:51.215 "state": "completed" 00:15:51.215 }, 00:15:51.215 "cntlid": 95, 00:15:51.215 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a", 00:15:51.215 "listen_address": { 00:15:51.215 "adrfam": "IPv4", 00:15:51.215 "traddr": "10.0.0.3", 00:15:51.215 "trsvcid": "4420", 00:15:51.215 "trtype": "TCP" 00:15:51.215 }, 00:15:51.215 "peer_address": { 00:15:51.215 "adrfam": "IPv4", 00:15:51.215 "traddr": "10.0.0.1", 00:15:51.215 "trsvcid": "37574", 00:15:51.215 "trtype": "TCP" 00:15:51.215 }, 00:15:51.215 "qid": 0, 00:15:51.215 "state": "enabled", 00:15:51.215 "thread": "nvmf_tgt_poll_group_000" 00:15:51.215 } 00:15:51.215 ]' 00:15:51.215 11:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:51.215 11:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:51.215 11:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:51.215 11:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:51.215 11:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:51.215 11:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:51.215 11:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:51.215 11:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:51.474 11:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ODUxYTg2NGMwMGE3ZWM0ODJlYmZkMzVkNDkwOGFmZWViNzcwOTJlM2FlOTM1MmRjMzZjMWRlMTQwYzZjNTgwZnmdTmU=: 00:15:51.474 11:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a --hostid 806d442c-08ba-4719-b76d-33169283459a -l 0 --dhchap-secret DHHC-1:03:ODUxYTg2NGMwMGE3ZWM0ODJlYmZkMzVkNDkwOGFmZWViNzcwOTJlM2FlOTM1MmRjMzZjMWRlMTQwYzZjNTgwZnmdTmU=: 00:15:52.409 11:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:52.409 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:52.409 11:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a 00:15:52.409 11:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.409 11:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:52.409 11:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.409 11:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:15:52.409 11:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:52.409 11:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:52.409 
11:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:52.409 11:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:52.668 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:15:52.668 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:52.668 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:52.668 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:52.668 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:52.668 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:52.668 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:52.668 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.668 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:52.668 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.668 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:52.668 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:52.668 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:52.927 00:15:53.186 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:53.186 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:53.186 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:53.444 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:53.444 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:53.444 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.444 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:53.444 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.444 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:53.444 { 00:15:53.444 "auth": { 00:15:53.444 "dhgroup": "null", 00:15:53.444 "digest": "sha512", 00:15:53.444 "state": "completed" 00:15:53.444 }, 00:15:53.444 "cntlid": 97, 00:15:53.444 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a", 00:15:53.444 "listen_address": { 00:15:53.444 "adrfam": "IPv4", 00:15:53.444 "traddr": "10.0.0.3", 00:15:53.444 "trsvcid": "4420", 00:15:53.444 "trtype": "TCP" 00:15:53.444 }, 00:15:53.444 "peer_address": { 00:15:53.444 "adrfam": "IPv4", 00:15:53.444 "traddr": "10.0.0.1", 00:15:53.444 "trsvcid": "37608", 00:15:53.444 "trtype": "TCP" 00:15:53.444 }, 00:15:53.444 "qid": 0, 00:15:53.444 "state": "enabled", 00:15:53.444 "thread": "nvmf_tgt_poll_group_000" 00:15:53.444 } 00:15:53.444 ]' 00:15:53.444 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:53.444 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:53.444 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:53.444 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:53.444 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:53.444 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:53.444 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:53.444 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:54.011 11:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTU4ZTYwMjZmNWEyYzgzYjY3ZWU1MmE4ZjU0YzRmMjAwM2Q1ZGY4ZmVkZmVkZWY0w3W3ew==: --dhchap-ctrl-secret DHHC-1:03:OTgwNWJkMjE1MTUxNjMxNjRkZTlhZmI3YTJjMjIyN2RhYzM4MjEwYmYxZmI5NWFkZGQ5MTQ2M2U2NWUyYzQ3N6HE8a4=: 00:15:54.011 11:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a --hostid 806d442c-08ba-4719-b76d-33169283459a -l 0 --dhchap-secret DHHC-1:00:YTU4ZTYwMjZmNWEyYzgzYjY3ZWU1MmE4ZjU0YzRmMjAwM2Q1ZGY4ZmVkZmVkZWY0w3W3ew==: --dhchap-ctrl-secret DHHC-1:03:OTgwNWJkMjE1MTUxNjMxNjRkZTlhZmI3YTJjMjIyN2RhYzM4MjEwYmYxZmI5NWFkZGQ5MTQ2M2U2NWUyYzQ3N6HE8a4=: 00:15:54.578 11:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:54.578 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:54.578 11:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a 00:15:54.578 11:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.578 11:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:54.578 11:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 
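(Editor's note) The entries above repeat one `connect_authenticate` iteration per digest/dhgroup/key combination. Below is a minimal sketch of the SPDK-side sequence, assembled only from the RPC calls visible in this log; the socket path, NQNs and key names (`key1`/`ckey1`, etc.) are this test environment's own values — the keys are assumed to have been registered earlier in the run (not shown in this excerpt) — not general defaults.

```bash
#!/usr/bin/env bash
# Sketch of one DH-HMAC-CHAP iteration as driven by target/auth.sh in this log.
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
HOST_SOCK=/var/tmp/host.sock                     # host-side SPDK app socket ("hostrpc" in the log)
SUBNQN=nqn.2024-03.io.spdk:cnode0
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a

# Host side: restrict negotiation to a single digest and DH group.
"$RPC" -s "$HOST_SOCK" bdev_nvme_set_options \
  --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144

# Target side: allow the host on the subsystem with a key pair.
"$RPC" nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" \
  --dhchap-key key1 --dhchap-ctrlr-key ckey1

# Host side: attach a controller, authenticating with the matching keys.
"$RPC" -s "$HOST_SOCK" bdev_nvme_attach_controller -t tcp -f ipv4 \
  -a 10.0.0.3 -s 4420 -q "$HOSTNQN" -n "$SUBNQN" -b nvme0 \
  --dhchap-key key1 --dhchap-ctrlr-key ckey1

# Target side: confirm the negotiated digest/dhgroup/state on the qpair, then tear down.
"$RPC" nvmf_subsystem_get_qpairs "$SUBNQN" | jq -r '.[0].auth'
"$RPC" -s "$HOST_SOCK" bdev_nvme_detach_controller nvme0
"$RPC" nvmf_subsystem_remove_host "$SUBNQN" "$HOSTNQN"
```

The `jq` checks in the log verify exactly these fields (`.auth.digest`, `.auth.dhgroup`, `.auth.state == "completed"`) before the controller is detached.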
-- # [[ 0 == 0 ]] 00:15:54.578 11:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:54.578 11:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:54.578 11:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:54.837 11:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:15:54.837 11:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:54.837 11:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:54.837 11:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:54.837 11:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:54.837 11:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:54.837 11:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:54.837 11:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.837 11:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:54.837 11:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.837 11:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:54.837 11:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:54.837 11:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:55.402 00:15:55.402 11:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:55.402 11:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:55.402 11:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:55.660 11:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:55.660 11:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:55.660 11:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.660 11:28:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:55.660 11:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.660 11:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:55.660 { 00:15:55.660 "auth": { 00:15:55.660 "dhgroup": "null", 00:15:55.660 "digest": "sha512", 00:15:55.660 "state": "completed" 00:15:55.660 }, 00:15:55.660 "cntlid": 99, 00:15:55.660 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a", 00:15:55.660 "listen_address": { 00:15:55.660 "adrfam": "IPv4", 00:15:55.660 "traddr": "10.0.0.3", 00:15:55.660 "trsvcid": "4420", 00:15:55.660 "trtype": "TCP" 00:15:55.660 }, 00:15:55.660 "peer_address": { 00:15:55.660 "adrfam": "IPv4", 00:15:55.660 "traddr": "10.0.0.1", 00:15:55.660 "trsvcid": "44718", 00:15:55.660 "trtype": "TCP" 00:15:55.660 }, 00:15:55.660 "qid": 0, 00:15:55.660 "state": "enabled", 00:15:55.660 "thread": "nvmf_tgt_poll_group_000" 00:15:55.660 } 00:15:55.660 ]' 00:15:55.660 11:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:55.660 11:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:55.660 11:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:55.660 11:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:55.660 11:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:55.919 11:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:55.919 11:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:55.919 11:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:56.176 11:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTdlOWYyOTEzYTYxOThiMjE5ZmM2NThmOTY2YmJmZTg53tHC: --dhchap-ctrl-secret DHHC-1:02:ZThlN2MwNmMxNTdkMjhlYTU2YjU2MTY0ZmIwMThlYzFkZjAwNTU3YTdhMTQwNDU0RjLgqQ==: 00:15:56.176 11:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a --hostid 806d442c-08ba-4719-b76d-33169283459a -l 0 --dhchap-secret DHHC-1:01:OTdlOWYyOTEzYTYxOThiMjE5ZmM2NThmOTY2YmJmZTg53tHC: --dhchap-ctrl-secret DHHC-1:02:ZThlN2MwNmMxNTdkMjhlYTU2YjU2MTY0ZmIwMThlYzFkZjAwNTU3YTdhMTQwNDU0RjLgqQ==: 00:15:57.192 11:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:57.192 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:57.192 11:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a 00:15:57.192 11:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.192 11:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:57.192 11:28:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.192 11:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:57.192 11:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:57.192 11:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:57.459 11:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 00:15:57.459 11:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:57.459 11:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:57.459 11:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:57.459 11:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:57.459 11:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:57.459 11:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:57.459 11:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.459 11:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:57.459 11:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.459 11:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:57.459 11:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:57.459 11:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:57.717 00:15:57.717 11:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:57.717 11:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:57.717 11:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:58.284 11:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:58.284 11:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:58.284 11:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.284 11:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:58.284 11:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.284 11:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:58.284 { 00:15:58.284 "auth": { 00:15:58.284 "dhgroup": "null", 00:15:58.284 "digest": "sha512", 00:15:58.284 "state": "completed" 00:15:58.284 }, 00:15:58.284 "cntlid": 101, 00:15:58.284 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a", 00:15:58.284 "listen_address": { 00:15:58.284 "adrfam": "IPv4", 00:15:58.284 "traddr": "10.0.0.3", 00:15:58.284 "trsvcid": "4420", 00:15:58.284 "trtype": "TCP" 00:15:58.284 }, 00:15:58.284 "peer_address": { 00:15:58.284 "adrfam": "IPv4", 00:15:58.284 "traddr": "10.0.0.1", 00:15:58.284 "trsvcid": "44746", 00:15:58.284 "trtype": "TCP" 00:15:58.284 }, 00:15:58.284 "qid": 0, 00:15:58.284 "state": "enabled", 00:15:58.284 "thread": "nvmf_tgt_poll_group_000" 00:15:58.284 } 00:15:58.284 ]' 00:15:58.284 11:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:58.284 11:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:58.284 11:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:58.284 11:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:58.284 11:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:58.284 11:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:58.284 11:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:58.284 11:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:58.542 11:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ODhkNTRmMmE1ZmNkZmY0ZDExY2UyMjZlNjZlZDI4MzAwNzA5YmUxMmE5YWMyYWQzAlDRWQ==: --dhchap-ctrl-secret DHHC-1:01:ZmExZTRhMGYzNzhkMWQ2OTBkMjIyMzE3OWFiZjhkMTkzDxnU: 00:15:58.542 11:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a --hostid 806d442c-08ba-4719-b76d-33169283459a -l 0 --dhchap-secret DHHC-1:02:ODhkNTRmMmE1ZmNkZmY0ZDExY2UyMjZlNjZlZDI4MzAwNzA5YmUxMmE5YWMyYWQzAlDRWQ==: --dhchap-ctrl-secret DHHC-1:01:ZmExZTRhMGYzNzhkMWQ2OTBkMjIyMzE3OWFiZjhkMTkzDxnU: 00:15:59.477 11:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:59.477 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:59.477 11:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a 00:15:59.477 11:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.477 11:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@10 -- # set +x 00:15:59.477 11:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.477 11:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:59.477 11:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:59.477 11:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:00.043 11:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:16:00.043 11:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:00.043 11:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:00.043 11:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:00.043 11:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:00.043 11:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:00.043 11:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a --dhchap-key key3 00:16:00.043 11:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.043 11:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:00.043 11:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.044 11:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:00.044 11:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:00.044 11:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:00.302 00:16:00.560 11:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:00.560 11:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:00.560 11:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:00.818 11:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:00.818 11:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:00.818 11:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:16:00.818 11:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:00.818 11:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.818 11:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:00.818 { 00:16:00.818 "auth": { 00:16:00.818 "dhgroup": "null", 00:16:00.818 "digest": "sha512", 00:16:00.818 "state": "completed" 00:16:00.818 }, 00:16:00.818 "cntlid": 103, 00:16:00.818 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a", 00:16:00.818 "listen_address": { 00:16:00.818 "adrfam": "IPv4", 00:16:00.818 "traddr": "10.0.0.3", 00:16:00.818 "trsvcid": "4420", 00:16:00.818 "trtype": "TCP" 00:16:00.818 }, 00:16:00.818 "peer_address": { 00:16:00.818 "adrfam": "IPv4", 00:16:00.818 "traddr": "10.0.0.1", 00:16:00.818 "trsvcid": "44784", 00:16:00.818 "trtype": "TCP" 00:16:00.818 }, 00:16:00.818 "qid": 0, 00:16:00.818 "state": "enabled", 00:16:00.818 "thread": "nvmf_tgt_poll_group_000" 00:16:00.818 } 00:16:00.818 ]' 00:16:00.818 11:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:00.818 11:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:00.818 11:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:00.818 11:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:00.818 11:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:01.077 11:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:01.077 11:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:01.077 11:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:01.334 11:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ODUxYTg2NGMwMGE3ZWM0ODJlYmZkMzVkNDkwOGFmZWViNzcwOTJlM2FlOTM1MmRjMzZjMWRlMTQwYzZjNTgwZnmdTmU=: 00:16:01.334 11:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a --hostid 806d442c-08ba-4719-b76d-33169283459a -l 0 --dhchap-secret DHHC-1:03:ODUxYTg2NGMwMGE3ZWM0ODJlYmZkMzVkNDkwOGFmZWViNzcwOTJlM2FlOTM1MmRjMzZjMWRlMTQwYzZjNTgwZnmdTmU=: 00:16:02.269 11:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:02.269 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:02.269 11:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a 00:16:02.269 11:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.269 11:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:02.269 11:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:16:02.269 11:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:02.269 11:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:02.269 11:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:02.269 11:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:02.835 11:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:16:02.835 11:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:02.835 11:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:02.835 11:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:02.835 11:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:02.835 11:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:02.835 11:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:02.835 11:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.835 11:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:02.835 11:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.835 11:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:02.835 11:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:02.835 11:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:03.400 00:16:03.400 11:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:03.400 11:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:03.400 11:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:03.660 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:03.660 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:03.660 
11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.660 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:03.660 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.660 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:03.660 { 00:16:03.660 "auth": { 00:16:03.660 "dhgroup": "ffdhe2048", 00:16:03.660 "digest": "sha512", 00:16:03.660 "state": "completed" 00:16:03.660 }, 00:16:03.660 "cntlid": 105, 00:16:03.660 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a", 00:16:03.660 "listen_address": { 00:16:03.660 "adrfam": "IPv4", 00:16:03.660 "traddr": "10.0.0.3", 00:16:03.660 "trsvcid": "4420", 00:16:03.660 "trtype": "TCP" 00:16:03.660 }, 00:16:03.660 "peer_address": { 00:16:03.660 "adrfam": "IPv4", 00:16:03.660 "traddr": "10.0.0.1", 00:16:03.660 "trsvcid": "44802", 00:16:03.660 "trtype": "TCP" 00:16:03.660 }, 00:16:03.660 "qid": 0, 00:16:03.660 "state": "enabled", 00:16:03.660 "thread": "nvmf_tgt_poll_group_000" 00:16:03.660 } 00:16:03.660 ]' 00:16:03.660 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:03.660 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:03.660 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:03.660 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:03.660 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:03.918 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:03.918 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:03.918 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:04.176 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTU4ZTYwMjZmNWEyYzgzYjY3ZWU1MmE4ZjU0YzRmMjAwM2Q1ZGY4ZmVkZmVkZWY0w3W3ew==: --dhchap-ctrl-secret DHHC-1:03:OTgwNWJkMjE1MTUxNjMxNjRkZTlhZmI3YTJjMjIyN2RhYzM4MjEwYmYxZmI5NWFkZGQ5MTQ2M2U2NWUyYzQ3N6HE8a4=: 00:16:04.176 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a --hostid 806d442c-08ba-4719-b76d-33169283459a -l 0 --dhchap-secret DHHC-1:00:YTU4ZTYwMjZmNWEyYzgzYjY3ZWU1MmE4ZjU0YzRmMjAwM2Q1ZGY4ZmVkZmVkZWY0w3W3ew==: --dhchap-ctrl-secret DHHC-1:03:OTgwNWJkMjE1MTUxNjMxNjRkZTlhZmI3YTJjMjIyN2RhYzM4MjEwYmYxZmI5NWFkZGQ5MTQ2M2U2NWUyYzQ3N6HE8a4=: 00:16:05.110 11:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:05.110 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:05.110 11:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a 00:16:05.110 11:28:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.110 11:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:05.110 11:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.110 11:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:05.110 11:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:05.110 11:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:05.713 11:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:16:05.713 11:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:05.713 11:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:05.713 11:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:05.713 11:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:05.713 11:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:05.713 11:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:05.713 11:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.713 11:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:05.713 11:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.713 11:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:05.713 11:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:05.713 11:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:05.972 00:16:05.972 11:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:05.972 11:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:05.972 11:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:06.230 11:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # 
[[ nvme0 == \n\v\m\e\0 ]] 00:16:06.230 11:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:06.230 11:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.230 11:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:06.230 11:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.230 11:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:06.230 { 00:16:06.230 "auth": { 00:16:06.230 "dhgroup": "ffdhe2048", 00:16:06.230 "digest": "sha512", 00:16:06.230 "state": "completed" 00:16:06.230 }, 00:16:06.230 "cntlid": 107, 00:16:06.230 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a", 00:16:06.230 "listen_address": { 00:16:06.230 "adrfam": "IPv4", 00:16:06.230 "traddr": "10.0.0.3", 00:16:06.230 "trsvcid": "4420", 00:16:06.230 "trtype": "TCP" 00:16:06.230 }, 00:16:06.230 "peer_address": { 00:16:06.230 "adrfam": "IPv4", 00:16:06.230 "traddr": "10.0.0.1", 00:16:06.230 "trsvcid": "45722", 00:16:06.230 "trtype": "TCP" 00:16:06.230 }, 00:16:06.230 "qid": 0, 00:16:06.230 "state": "enabled", 00:16:06.230 "thread": "nvmf_tgt_poll_group_000" 00:16:06.230 } 00:16:06.230 ]' 00:16:06.230 11:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:06.489 11:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:06.489 11:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:06.489 11:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:06.489 11:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:06.489 11:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:06.489 11:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:06.489 11:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:06.747 11:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTdlOWYyOTEzYTYxOThiMjE5ZmM2NThmOTY2YmJmZTg53tHC: --dhchap-ctrl-secret DHHC-1:02:ZThlN2MwNmMxNTdkMjhlYTU2YjU2MTY0ZmIwMThlYzFkZjAwNTU3YTdhMTQwNDU0RjLgqQ==: 00:16:06.747 11:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a --hostid 806d442c-08ba-4719-b76d-33169283459a -l 0 --dhchap-secret DHHC-1:01:OTdlOWYyOTEzYTYxOThiMjE5ZmM2NThmOTY2YmJmZTg53tHC: --dhchap-ctrl-secret DHHC-1:02:ZThlN2MwNmMxNTdkMjhlYTU2YjU2MTY0ZmIwMThlYzFkZjAwNTU3YTdhMTQwNDU0RjLgqQ==: 00:16:07.683 11:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:07.683 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:07.683 11:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a 00:16:07.683 11:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.683 11:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:07.683 11:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.683 11:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:07.683 11:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:07.683 11:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:07.942 11:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:16:07.942 11:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:07.942 11:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:07.942 11:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:07.942 11:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:07.942 11:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:07.942 11:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:07.942 11:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.942 11:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:07.942 11:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.942 11:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:07.942 11:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:07.942 11:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:08.508 00:16:08.508 11:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:08.508 11:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:08.508 11:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # jq -r '.[].name' 00:16:08.508 11:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:08.508 11:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:08.508 11:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.508 11:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:08.765 11:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.765 11:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:08.765 { 00:16:08.765 "auth": { 00:16:08.765 "dhgroup": "ffdhe2048", 00:16:08.765 "digest": "sha512", 00:16:08.765 "state": "completed" 00:16:08.765 }, 00:16:08.765 "cntlid": 109, 00:16:08.765 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a", 00:16:08.765 "listen_address": { 00:16:08.765 "adrfam": "IPv4", 00:16:08.765 "traddr": "10.0.0.3", 00:16:08.765 "trsvcid": "4420", 00:16:08.765 "trtype": "TCP" 00:16:08.765 }, 00:16:08.765 "peer_address": { 00:16:08.765 "adrfam": "IPv4", 00:16:08.765 "traddr": "10.0.0.1", 00:16:08.765 "trsvcid": "45740", 00:16:08.765 "trtype": "TCP" 00:16:08.765 }, 00:16:08.765 "qid": 0, 00:16:08.765 "state": "enabled", 00:16:08.765 "thread": "nvmf_tgt_poll_group_000" 00:16:08.765 } 00:16:08.765 ]' 00:16:08.765 11:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:08.765 11:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:08.765 11:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:08.765 11:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:08.765 11:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:08.765 11:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:08.765 11:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:08.765 11:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:09.024 11:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ODhkNTRmMmE1ZmNkZmY0ZDExY2UyMjZlNjZlZDI4MzAwNzA5YmUxMmE5YWMyYWQzAlDRWQ==: --dhchap-ctrl-secret DHHC-1:01:ZmExZTRhMGYzNzhkMWQ2OTBkMjIyMzE3OWFiZjhkMTkzDxnU: 00:16:09.024 11:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a --hostid 806d442c-08ba-4719-b76d-33169283459a -l 0 --dhchap-secret DHHC-1:02:ODhkNTRmMmE1ZmNkZmY0ZDExY2UyMjZlNjZlZDI4MzAwNzA5YmUxMmE5YWMyYWQzAlDRWQ==: --dhchap-ctrl-secret DHHC-1:01:ZmExZTRhMGYzNzhkMWQ2OTBkMjIyMzE3OWFiZjhkMTkzDxnU: 00:16:09.590 11:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:09.590 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
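A condensed sketch of one connect_authenticate pass as exercised above, reconstructed only from the commands visible in this run: the NQNs, the 10.0.0.3/4420 target address, and the key names come from the log itself; the DHHC-1 secret strings are elided; rpc_cmd is the test harness helper that talks to the nvmf target's default RPC socket, while host-side calls go through rpc.py -s /var/tmp/host.sock.

  # 1. Host side: restrict DH-CHAP negotiation to the digest/dhgroup under test
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
      --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
  # 2. Target side: register the host NQN with the key pair for this iteration
  rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
      nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a \
      --dhchap-key key0 --dhchap-ctrlr-key ckey0
  # 3. Host side: attach a controller with the same keys (bdev_connect in auth.sh)
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller \
      -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
      -q nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a \
      -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
  # 4. Target side: confirm the qpair authenticated with the expected parameters
  rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | jq -r '.[0].auth.state'   # expect "completed"
  # 5. Detach the bdev controller, repeat the handshake with the kernel initiator, then clean up
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
  nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
      -q nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a \
      --hostid 806d442c-08ba-4719-b76d-33169283459a -l 0 \
      --dhchap-secret "DHHC-1:..." --dhchap-ctrl-secret "DHHC-1:..."   # secrets elided, see log above
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0
  rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 \
      nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a

The run above repeats this loop for sha512 with each dhgroup (null, ffdhe2048, ffdhe3072) and each of keys 0-3, checking .auth.digest, .auth.dhgroup, and .auth.state on the resulting qpair every time.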
00:16:09.590 11:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a 00:16:09.590 11:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.590 11:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:09.590 11:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.590 11:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:09.590 11:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:09.590 11:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:10.157 11:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:16:10.157 11:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:10.157 11:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:10.157 11:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:10.157 11:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:10.157 11:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:10.157 11:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a --dhchap-key key3 00:16:10.157 11:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.157 11:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:10.157 11:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.157 11:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:10.157 11:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:10.157 11:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:10.414 00:16:10.414 11:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:10.414 11:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:10.414 11:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:10.672 11:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:10.672 11:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:10.672 11:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.672 11:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:10.672 11:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.672 11:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:10.672 { 00:16:10.672 "auth": { 00:16:10.672 "dhgroup": "ffdhe2048", 00:16:10.672 "digest": "sha512", 00:16:10.672 "state": "completed" 00:16:10.672 }, 00:16:10.672 "cntlid": 111, 00:16:10.672 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a", 00:16:10.672 "listen_address": { 00:16:10.672 "adrfam": "IPv4", 00:16:10.672 "traddr": "10.0.0.3", 00:16:10.672 "trsvcid": "4420", 00:16:10.672 "trtype": "TCP" 00:16:10.672 }, 00:16:10.672 "peer_address": { 00:16:10.672 "adrfam": "IPv4", 00:16:10.672 "traddr": "10.0.0.1", 00:16:10.672 "trsvcid": "45780", 00:16:10.672 "trtype": "TCP" 00:16:10.672 }, 00:16:10.672 "qid": 0, 00:16:10.672 "state": "enabled", 00:16:10.672 "thread": "nvmf_tgt_poll_group_000" 00:16:10.672 } 00:16:10.672 ]' 00:16:10.672 11:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:10.672 11:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:10.672 11:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:10.930 11:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:10.930 11:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:10.930 11:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:10.930 11:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:10.930 11:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:11.188 11:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ODUxYTg2NGMwMGE3ZWM0ODJlYmZkMzVkNDkwOGFmZWViNzcwOTJlM2FlOTM1MmRjMzZjMWRlMTQwYzZjNTgwZnmdTmU=: 00:16:11.188 11:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a --hostid 806d442c-08ba-4719-b76d-33169283459a -l 0 --dhchap-secret DHHC-1:03:ODUxYTg2NGMwMGE3ZWM0ODJlYmZkMzVkNDkwOGFmZWViNzcwOTJlM2FlOTM1MmRjMzZjMWRlMTQwYzZjNTgwZnmdTmU=: 00:16:12.123 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:12.123 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:12.123 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a 00:16:12.123 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.123 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:12.123 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.123 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:12.123 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:12.123 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:12.123 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:12.380 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:16:12.380 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:12.380 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:12.380 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:12.380 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:12.380 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:12.380 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:12.380 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.380 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:12.380 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.380 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:12.380 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:12.380 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:12.638 00:16:12.638 11:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:12.638 11:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # jq -r '.[].name' 00:16:12.638 11:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:13.271 11:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:13.271 11:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:13.271 11:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.271 11:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:13.271 11:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.271 11:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:13.271 { 00:16:13.271 "auth": { 00:16:13.271 "dhgroup": "ffdhe3072", 00:16:13.271 "digest": "sha512", 00:16:13.271 "state": "completed" 00:16:13.271 }, 00:16:13.271 "cntlid": 113, 00:16:13.271 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a", 00:16:13.271 "listen_address": { 00:16:13.271 "adrfam": "IPv4", 00:16:13.271 "traddr": "10.0.0.3", 00:16:13.271 "trsvcid": "4420", 00:16:13.271 "trtype": "TCP" 00:16:13.271 }, 00:16:13.271 "peer_address": { 00:16:13.271 "adrfam": "IPv4", 00:16:13.271 "traddr": "10.0.0.1", 00:16:13.271 "trsvcid": "45810", 00:16:13.271 "trtype": "TCP" 00:16:13.271 }, 00:16:13.271 "qid": 0, 00:16:13.271 "state": "enabled", 00:16:13.271 "thread": "nvmf_tgt_poll_group_000" 00:16:13.271 } 00:16:13.271 ]' 00:16:13.271 11:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:13.271 11:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:13.271 11:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:13.271 11:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:13.271 11:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:13.271 11:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:13.271 11:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:13.271 11:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:13.551 11:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTU4ZTYwMjZmNWEyYzgzYjY3ZWU1MmE4ZjU0YzRmMjAwM2Q1ZGY4ZmVkZmVkZWY0w3W3ew==: --dhchap-ctrl-secret DHHC-1:03:OTgwNWJkMjE1MTUxNjMxNjRkZTlhZmI3YTJjMjIyN2RhYzM4MjEwYmYxZmI5NWFkZGQ5MTQ2M2U2NWUyYzQ3N6HE8a4=: 00:16:13.551 11:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a --hostid 806d442c-08ba-4719-b76d-33169283459a -l 0 --dhchap-secret DHHC-1:00:YTU4ZTYwMjZmNWEyYzgzYjY3ZWU1MmE4ZjU0YzRmMjAwM2Q1ZGY4ZmVkZmVkZWY0w3W3ew==: --dhchap-ctrl-secret 
DHHC-1:03:OTgwNWJkMjE1MTUxNjMxNjRkZTlhZmI3YTJjMjIyN2RhYzM4MjEwYmYxZmI5NWFkZGQ5MTQ2M2U2NWUyYzQ3N6HE8a4=: 00:16:14.496 11:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:14.496 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:14.496 11:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a 00:16:14.496 11:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.496 11:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:14.496 11:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.496 11:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:14.496 11:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:14.496 11:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:14.754 11:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:16:14.754 11:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:14.754 11:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:14.754 11:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:14.754 11:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:14.754 11:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:14.754 11:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:14.754 11:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.754 11:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:14.754 11:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.754 11:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:14.754 11:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:14.754 11:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:15.321 00:16:15.321 11:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:15.321 11:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:15.321 11:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:15.579 11:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:15.579 11:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:15.579 11:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.579 11:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:15.579 11:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.579 11:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:15.579 { 00:16:15.579 "auth": { 00:16:15.579 "dhgroup": "ffdhe3072", 00:16:15.579 "digest": "sha512", 00:16:15.579 "state": "completed" 00:16:15.579 }, 00:16:15.579 "cntlid": 115, 00:16:15.579 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a", 00:16:15.579 "listen_address": { 00:16:15.579 "adrfam": "IPv4", 00:16:15.579 "traddr": "10.0.0.3", 00:16:15.579 "trsvcid": "4420", 00:16:15.579 "trtype": "TCP" 00:16:15.579 }, 00:16:15.579 "peer_address": { 00:16:15.579 "adrfam": "IPv4", 00:16:15.579 "traddr": "10.0.0.1", 00:16:15.579 "trsvcid": "58878", 00:16:15.579 "trtype": "TCP" 00:16:15.579 }, 00:16:15.579 "qid": 0, 00:16:15.579 "state": "enabled", 00:16:15.579 "thread": "nvmf_tgt_poll_group_000" 00:16:15.579 } 00:16:15.579 ]' 00:16:15.579 11:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:15.579 11:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:15.579 11:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:15.579 11:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:15.579 11:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:15.579 11:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:15.579 11:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:15.579 11:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:16.146 11:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTdlOWYyOTEzYTYxOThiMjE5ZmM2NThmOTY2YmJmZTg53tHC: --dhchap-ctrl-secret DHHC-1:02:ZThlN2MwNmMxNTdkMjhlYTU2YjU2MTY0ZmIwMThlYzFkZjAwNTU3YTdhMTQwNDU0RjLgqQ==: 00:16:16.146 11:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a --hostid 
806d442c-08ba-4719-b76d-33169283459a -l 0 --dhchap-secret DHHC-1:01:OTdlOWYyOTEzYTYxOThiMjE5ZmM2NThmOTY2YmJmZTg53tHC: --dhchap-ctrl-secret DHHC-1:02:ZThlN2MwNmMxNTdkMjhlYTU2YjU2MTY0ZmIwMThlYzFkZjAwNTU3YTdhMTQwNDU0RjLgqQ==: 00:16:16.713 11:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:16.713 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:16.713 11:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a 00:16:16.713 11:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.713 11:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:16.713 11:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.713 11:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:16.713 11:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:16.713 11:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:16.971 11:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 00:16:16.971 11:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:16.971 11:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:16.971 11:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:16.971 11:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:16.971 11:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:16.971 11:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:16.971 11:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.971 11:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:16.971 11:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.971 11:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:16.971 11:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:16.971 11:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 
-q nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:17.229 00:16:17.229 11:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:17.229 11:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:17.229 11:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:17.795 11:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:17.795 11:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:17.795 11:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.795 11:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:17.795 11:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.795 11:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:17.795 { 00:16:17.795 "auth": { 00:16:17.795 "dhgroup": "ffdhe3072", 00:16:17.795 "digest": "sha512", 00:16:17.795 "state": "completed" 00:16:17.795 }, 00:16:17.795 "cntlid": 117, 00:16:17.795 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a", 00:16:17.795 "listen_address": { 00:16:17.795 "adrfam": "IPv4", 00:16:17.796 "traddr": "10.0.0.3", 00:16:17.796 "trsvcid": "4420", 00:16:17.796 "trtype": "TCP" 00:16:17.796 }, 00:16:17.796 "peer_address": { 00:16:17.796 "adrfam": "IPv4", 00:16:17.796 "traddr": "10.0.0.1", 00:16:17.796 "trsvcid": "58898", 00:16:17.796 "trtype": "TCP" 00:16:17.796 }, 00:16:17.796 "qid": 0, 00:16:17.796 "state": "enabled", 00:16:17.796 "thread": "nvmf_tgt_poll_group_000" 00:16:17.796 } 00:16:17.796 ]' 00:16:17.796 11:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:17.796 11:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:17.796 11:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:17.796 11:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:17.796 11:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:17.796 11:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:17.796 11:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:17.796 11:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:18.362 11:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ODhkNTRmMmE1ZmNkZmY0ZDExY2UyMjZlNjZlZDI4MzAwNzA5YmUxMmE5YWMyYWQzAlDRWQ==: --dhchap-ctrl-secret DHHC-1:01:ZmExZTRhMGYzNzhkMWQ2OTBkMjIyMzE3OWFiZjhkMTkzDxnU: 00:16:18.362 11:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a --hostid 806d442c-08ba-4719-b76d-33169283459a -l 0 --dhchap-secret DHHC-1:02:ODhkNTRmMmE1ZmNkZmY0ZDExY2UyMjZlNjZlZDI4MzAwNzA5YmUxMmE5YWMyYWQzAlDRWQ==: --dhchap-ctrl-secret DHHC-1:01:ZmExZTRhMGYzNzhkMWQ2OTBkMjIyMzE3OWFiZjhkMTkzDxnU: 00:16:18.929 11:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:18.929 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:18.929 11:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a 00:16:18.929 11:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.929 11:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:18.929 11:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.929 11:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:18.929 11:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:18.929 11:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:19.495 11:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:16:19.495 11:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:19.495 11:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:19.495 11:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:19.495 11:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:19.495 11:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:19.495 11:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a --dhchap-key key3 00:16:19.495 11:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.495 11:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:19.495 11:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.495 11:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:19.495 11:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:19.495 11:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:20.125 00:16:20.125 11:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:20.125 11:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:20.125 11:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:20.384 11:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:20.384 11:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:20.384 11:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.384 11:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:20.384 11:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.384 11:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:20.384 { 00:16:20.384 "auth": { 00:16:20.384 "dhgroup": "ffdhe3072", 00:16:20.384 "digest": "sha512", 00:16:20.384 "state": "completed" 00:16:20.384 }, 00:16:20.384 "cntlid": 119, 00:16:20.384 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a", 00:16:20.384 "listen_address": { 00:16:20.384 "adrfam": "IPv4", 00:16:20.384 "traddr": "10.0.0.3", 00:16:20.384 "trsvcid": "4420", 00:16:20.384 "trtype": "TCP" 00:16:20.384 }, 00:16:20.384 "peer_address": { 00:16:20.384 "adrfam": "IPv4", 00:16:20.384 "traddr": "10.0.0.1", 00:16:20.384 "trsvcid": "58920", 00:16:20.384 "trtype": "TCP" 00:16:20.384 }, 00:16:20.384 "qid": 0, 00:16:20.384 "state": "enabled", 00:16:20.384 "thread": "nvmf_tgt_poll_group_000" 00:16:20.384 } 00:16:20.384 ]' 00:16:20.384 11:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:20.384 11:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:20.384 11:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:20.643 11:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:20.643 11:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:20.643 11:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:20.643 11:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:20.643 11:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:20.901 11:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ODUxYTg2NGMwMGE3ZWM0ODJlYmZkMzVkNDkwOGFmZWViNzcwOTJlM2FlOTM1MmRjMzZjMWRlMTQwYzZjNTgwZnmdTmU=: 00:16:20.901 11:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 
-q nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a --hostid 806d442c-08ba-4719-b76d-33169283459a -l 0 --dhchap-secret DHHC-1:03:ODUxYTg2NGMwMGE3ZWM0ODJlYmZkMzVkNDkwOGFmZWViNzcwOTJlM2FlOTM1MmRjMzZjMWRlMTQwYzZjNTgwZnmdTmU=: 00:16:21.834 11:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:21.834 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:21.834 11:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a 00:16:21.834 11:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.834 11:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:21.834 11:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.834 11:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:21.834 11:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:21.834 11:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:21.834 11:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:22.092 11:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:16:22.092 11:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:22.092 11:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:22.092 11:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:22.092 11:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:22.092 11:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:22.092 11:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:22.092 11:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.092 11:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:22.092 11:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.092 11:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:22.092 11:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:22.092 11:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:22.350 00:16:22.607 11:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:22.607 11:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:22.607 11:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:22.865 11:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:22.865 11:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:22.865 11:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.865 11:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:22.865 11:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.865 11:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:22.865 { 00:16:22.865 "auth": { 00:16:22.865 "dhgroup": "ffdhe4096", 00:16:22.865 "digest": "sha512", 00:16:22.865 "state": "completed" 00:16:22.865 }, 00:16:22.865 "cntlid": 121, 00:16:22.865 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a", 00:16:22.865 "listen_address": { 00:16:22.865 "adrfam": "IPv4", 00:16:22.865 "traddr": "10.0.0.3", 00:16:22.865 "trsvcid": "4420", 00:16:22.865 "trtype": "TCP" 00:16:22.865 }, 00:16:22.865 "peer_address": { 00:16:22.865 "adrfam": "IPv4", 00:16:22.865 "traddr": "10.0.0.1", 00:16:22.865 "trsvcid": "58956", 00:16:22.865 "trtype": "TCP" 00:16:22.865 }, 00:16:22.865 "qid": 0, 00:16:22.865 "state": "enabled", 00:16:22.865 "thread": "nvmf_tgt_poll_group_000" 00:16:22.865 } 00:16:22.865 ]' 00:16:22.865 11:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:23.123 11:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:23.123 11:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:23.123 11:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:23.123 11:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:23.123 11:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:23.123 11:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:23.123 11:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:23.689 11:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTU4ZTYwMjZmNWEyYzgzYjY3ZWU1MmE4ZjU0YzRmMjAwM2Q1ZGY4ZmVkZmVkZWY0w3W3ew==: --dhchap-ctrl-secret 
DHHC-1:03:OTgwNWJkMjE1MTUxNjMxNjRkZTlhZmI3YTJjMjIyN2RhYzM4MjEwYmYxZmI5NWFkZGQ5MTQ2M2U2NWUyYzQ3N6HE8a4=: 00:16:23.689 11:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a --hostid 806d442c-08ba-4719-b76d-33169283459a -l 0 --dhchap-secret DHHC-1:00:YTU4ZTYwMjZmNWEyYzgzYjY3ZWU1MmE4ZjU0YzRmMjAwM2Q1ZGY4ZmVkZmVkZWY0w3W3ew==: --dhchap-ctrl-secret DHHC-1:03:OTgwNWJkMjE1MTUxNjMxNjRkZTlhZmI3YTJjMjIyN2RhYzM4MjEwYmYxZmI5NWFkZGQ5MTQ2M2U2NWUyYzQ3N6HE8a4=: 00:16:24.622 11:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:24.622 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:24.622 11:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a 00:16:24.622 11:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.622 11:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:24.622 11:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.622 11:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:24.622 11:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:24.622 11:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:25.188 11:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:16:25.188 11:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:25.188 11:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:25.188 11:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:25.188 11:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:25.188 11:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:25.188 11:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:25.188 11:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.188 11:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.188 11:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.188 11:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:25.188 11:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 
10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:25.188 11:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:25.446 00:16:25.446 11:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:25.446 11:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:25.446 11:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:26.013 11:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:26.013 11:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:26.013 11:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.013 11:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.013 11:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.013 11:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:26.013 { 00:16:26.013 "auth": { 00:16:26.013 "dhgroup": "ffdhe4096", 00:16:26.013 "digest": "sha512", 00:16:26.013 "state": "completed" 00:16:26.013 }, 00:16:26.013 "cntlid": 123, 00:16:26.013 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a", 00:16:26.013 "listen_address": { 00:16:26.013 "adrfam": "IPv4", 00:16:26.013 "traddr": "10.0.0.3", 00:16:26.013 "trsvcid": "4420", 00:16:26.013 "trtype": "TCP" 00:16:26.013 }, 00:16:26.013 "peer_address": { 00:16:26.013 "adrfam": "IPv4", 00:16:26.013 "traddr": "10.0.0.1", 00:16:26.013 "trsvcid": "36186", 00:16:26.013 "trtype": "TCP" 00:16:26.013 }, 00:16:26.013 "qid": 0, 00:16:26.013 "state": "enabled", 00:16:26.013 "thread": "nvmf_tgt_poll_group_000" 00:16:26.013 } 00:16:26.013 ]' 00:16:26.013 11:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:26.013 11:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:26.013 11:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:26.013 11:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:26.013 11:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:26.271 11:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:26.271 11:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:26.271 11:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:26.580 11:29:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTdlOWYyOTEzYTYxOThiMjE5ZmM2NThmOTY2YmJmZTg53tHC: --dhchap-ctrl-secret DHHC-1:02:ZThlN2MwNmMxNTdkMjhlYTU2YjU2MTY0ZmIwMThlYzFkZjAwNTU3YTdhMTQwNDU0RjLgqQ==: 00:16:26.580 11:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a --hostid 806d442c-08ba-4719-b76d-33169283459a -l 0 --dhchap-secret DHHC-1:01:OTdlOWYyOTEzYTYxOThiMjE5ZmM2NThmOTY2YmJmZTg53tHC: --dhchap-ctrl-secret DHHC-1:02:ZThlN2MwNmMxNTdkMjhlYTU2YjU2MTY0ZmIwMThlYzFkZjAwNTU3YTdhMTQwNDU0RjLgqQ==: 00:16:27.545 11:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:27.545 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:27.545 11:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a 00:16:27.545 11:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.545 11:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:27.545 11:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.545 11:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:27.545 11:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:27.545 11:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:27.804 11:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:16:27.804 11:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:27.804 11:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:27.804 11:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:27.804 11:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:27.804 11:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:27.805 11:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:27.805 11:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.805 11:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:27.805 11:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.805 11:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:27.805 11:29:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:27.805 11:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:28.372 00:16:28.372 11:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:28.372 11:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:28.372 11:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:28.630 11:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:28.630 11:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:28.630 11:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.630 11:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:28.630 11:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.630 11:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:28.630 { 00:16:28.630 "auth": { 00:16:28.630 "dhgroup": "ffdhe4096", 00:16:28.630 "digest": "sha512", 00:16:28.630 "state": "completed" 00:16:28.630 }, 00:16:28.630 "cntlid": 125, 00:16:28.630 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a", 00:16:28.630 "listen_address": { 00:16:28.630 "adrfam": "IPv4", 00:16:28.630 "traddr": "10.0.0.3", 00:16:28.630 "trsvcid": "4420", 00:16:28.630 "trtype": "TCP" 00:16:28.630 }, 00:16:28.630 "peer_address": { 00:16:28.630 "adrfam": "IPv4", 00:16:28.630 "traddr": "10.0.0.1", 00:16:28.630 "trsvcid": "36210", 00:16:28.630 "trtype": "TCP" 00:16:28.630 }, 00:16:28.630 "qid": 0, 00:16:28.630 "state": "enabled", 00:16:28.630 "thread": "nvmf_tgt_poll_group_000" 00:16:28.630 } 00:16:28.630 ]' 00:16:28.630 11:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:28.889 11:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:28.889 11:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:28.889 11:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:28.889 11:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:28.889 11:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:28.889 11:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:28.889 11:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:29.457 11:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ODhkNTRmMmE1ZmNkZmY0ZDExY2UyMjZlNjZlZDI4MzAwNzA5YmUxMmE5YWMyYWQzAlDRWQ==: --dhchap-ctrl-secret DHHC-1:01:ZmExZTRhMGYzNzhkMWQ2OTBkMjIyMzE3OWFiZjhkMTkzDxnU: 00:16:29.457 11:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a --hostid 806d442c-08ba-4719-b76d-33169283459a -l 0 --dhchap-secret DHHC-1:02:ODhkNTRmMmE1ZmNkZmY0ZDExY2UyMjZlNjZlZDI4MzAwNzA5YmUxMmE5YWMyYWQzAlDRWQ==: --dhchap-ctrl-secret DHHC-1:01:ZmExZTRhMGYzNzhkMWQ2OTBkMjIyMzE3OWFiZjhkMTkzDxnU: 00:16:30.023 11:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:30.023 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:30.023 11:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a 00:16:30.023 11:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.023 11:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:30.023 11:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.023 11:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:30.023 11:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:30.023 11:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:30.589 11:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:16:30.589 11:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:30.589 11:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:30.590 11:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:30.590 11:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:30.590 11:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:30.590 11:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a --dhchap-key key3 00:16:30.590 11:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.590 11:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:30.590 11:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.590 11:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key3 00:16:30.590 11:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:30.590 11:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:31.157 00:16:31.157 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:31.157 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:31.157 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:31.444 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:31.444 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:31.444 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.444 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.444 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.444 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:31.444 { 00:16:31.444 "auth": { 00:16:31.444 "dhgroup": "ffdhe4096", 00:16:31.444 "digest": "sha512", 00:16:31.444 "state": "completed" 00:16:31.444 }, 00:16:31.444 "cntlid": 127, 00:16:31.444 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a", 00:16:31.444 "listen_address": { 00:16:31.444 "adrfam": "IPv4", 00:16:31.444 "traddr": "10.0.0.3", 00:16:31.444 "trsvcid": "4420", 00:16:31.444 "trtype": "TCP" 00:16:31.444 }, 00:16:31.444 "peer_address": { 00:16:31.444 "adrfam": "IPv4", 00:16:31.444 "traddr": "10.0.0.1", 00:16:31.444 "trsvcid": "36236", 00:16:31.444 "trtype": "TCP" 00:16:31.444 }, 00:16:31.444 "qid": 0, 00:16:31.444 "state": "enabled", 00:16:31.444 "thread": "nvmf_tgt_poll_group_000" 00:16:31.444 } 00:16:31.444 ]' 00:16:31.444 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:31.444 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:31.444 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:31.444 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:31.444 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:31.444 11:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:31.444 11:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:31.444 11:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:31.702 11:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ODUxYTg2NGMwMGE3ZWM0ODJlYmZkMzVkNDkwOGFmZWViNzcwOTJlM2FlOTM1MmRjMzZjMWRlMTQwYzZjNTgwZnmdTmU=: 00:16:31.702 11:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a --hostid 806d442c-08ba-4719-b76d-33169283459a -l 0 --dhchap-secret DHHC-1:03:ODUxYTg2NGMwMGE3ZWM0ODJlYmZkMzVkNDkwOGFmZWViNzcwOTJlM2FlOTM1MmRjMzZjMWRlMTQwYzZjNTgwZnmdTmU=: 00:16:32.658 11:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:32.658 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:32.658 11:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a 00:16:32.658 11:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.658 11:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:32.658 11:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.658 11:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:32.658 11:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:32.658 11:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:32.658 11:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:32.917 11:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:16:32.917 11:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:32.917 11:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:32.917 11:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:32.917 11:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:32.917 11:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:32.917 11:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:32.917 11:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.917 11:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:32.917 11:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.917 11:29:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:32.917 11:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:32.917 11:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:33.482 00:16:33.482 11:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:33.482 11:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:33.482 11:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:33.739 11:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:33.739 11:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:33.739 11:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.739 11:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:33.739 11:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.739 11:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:33.739 { 00:16:33.739 "auth": { 00:16:33.739 "dhgroup": "ffdhe6144", 00:16:33.739 "digest": "sha512", 00:16:33.739 "state": "completed" 00:16:33.739 }, 00:16:33.739 "cntlid": 129, 00:16:33.739 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a", 00:16:33.739 "listen_address": { 00:16:33.739 "adrfam": "IPv4", 00:16:33.739 "traddr": "10.0.0.3", 00:16:33.739 "trsvcid": "4420", 00:16:33.739 "trtype": "TCP" 00:16:33.739 }, 00:16:33.739 "peer_address": { 00:16:33.739 "adrfam": "IPv4", 00:16:33.739 "traddr": "10.0.0.1", 00:16:33.739 "trsvcid": "36252", 00:16:33.739 "trtype": "TCP" 00:16:33.739 }, 00:16:33.739 "qid": 0, 00:16:33.739 "state": "enabled", 00:16:33.739 "thread": "nvmf_tgt_poll_group_000" 00:16:33.739 } 00:16:33.739 ]' 00:16:33.739 11:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:33.740 11:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:33.740 11:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:33.740 11:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:33.740 11:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:33.997 11:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:33.997 11:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:33.997 11:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:34.256 11:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTU4ZTYwMjZmNWEyYzgzYjY3ZWU1MmE4ZjU0YzRmMjAwM2Q1ZGY4ZmVkZmVkZWY0w3W3ew==: --dhchap-ctrl-secret DHHC-1:03:OTgwNWJkMjE1MTUxNjMxNjRkZTlhZmI3YTJjMjIyN2RhYzM4MjEwYmYxZmI5NWFkZGQ5MTQ2M2U2NWUyYzQ3N6HE8a4=: 00:16:34.256 11:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a --hostid 806d442c-08ba-4719-b76d-33169283459a -l 0 --dhchap-secret DHHC-1:00:YTU4ZTYwMjZmNWEyYzgzYjY3ZWU1MmE4ZjU0YzRmMjAwM2Q1ZGY4ZmVkZmVkZWY0w3W3ew==: --dhchap-ctrl-secret DHHC-1:03:OTgwNWJkMjE1MTUxNjMxNjRkZTlhZmI3YTJjMjIyN2RhYzM4MjEwYmYxZmI5NWFkZGQ5MTQ2M2U2NWUyYzQ3N6HE8a4=: 00:16:35.191 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:35.191 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:35.191 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a 00:16:35.191 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.191 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.192 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.192 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:35.192 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:35.192 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:35.449 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:16:35.449 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:35.449 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:35.449 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:35.449 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:35.449 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:35.450 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:35.450 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.450 11:29:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.450 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.450 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:35.450 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:35.450 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:36.020 00:16:36.020 11:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:36.020 11:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:36.020 11:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:36.289 11:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:36.289 11:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:36.289 11:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.289 11:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.289 11:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.289 11:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:36.289 { 00:16:36.289 "auth": { 00:16:36.289 "dhgroup": "ffdhe6144", 00:16:36.289 "digest": "sha512", 00:16:36.289 "state": "completed" 00:16:36.289 }, 00:16:36.289 "cntlid": 131, 00:16:36.289 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a", 00:16:36.289 "listen_address": { 00:16:36.289 "adrfam": "IPv4", 00:16:36.289 "traddr": "10.0.0.3", 00:16:36.289 "trsvcid": "4420", 00:16:36.289 "trtype": "TCP" 00:16:36.289 }, 00:16:36.289 "peer_address": { 00:16:36.289 "adrfam": "IPv4", 00:16:36.289 "traddr": "10.0.0.1", 00:16:36.289 "trsvcid": "53626", 00:16:36.289 "trtype": "TCP" 00:16:36.289 }, 00:16:36.289 "qid": 0, 00:16:36.289 "state": "enabled", 00:16:36.289 "thread": "nvmf_tgt_poll_group_000" 00:16:36.289 } 00:16:36.289 ]' 00:16:36.289 11:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:36.289 11:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:36.289 11:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:36.548 11:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:36.548 11:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq 
-r '.[0].auth.state' 00:16:36.548 11:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:36.548 11:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:36.548 11:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:36.807 11:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTdlOWYyOTEzYTYxOThiMjE5ZmM2NThmOTY2YmJmZTg53tHC: --dhchap-ctrl-secret DHHC-1:02:ZThlN2MwNmMxNTdkMjhlYTU2YjU2MTY0ZmIwMThlYzFkZjAwNTU3YTdhMTQwNDU0RjLgqQ==: 00:16:36.807 11:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a --hostid 806d442c-08ba-4719-b76d-33169283459a -l 0 --dhchap-secret DHHC-1:01:OTdlOWYyOTEzYTYxOThiMjE5ZmM2NThmOTY2YmJmZTg53tHC: --dhchap-ctrl-secret DHHC-1:02:ZThlN2MwNmMxNTdkMjhlYTU2YjU2MTY0ZmIwMThlYzFkZjAwNTU3YTdhMTQwNDU0RjLgqQ==: 00:16:37.744 11:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:37.744 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:37.744 11:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a 00:16:37.744 11:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.744 11:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.744 11:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.744 11:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:37.744 11:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:37.744 11:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:37.744 11:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:16:37.744 11:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:37.744 11:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:37.744 11:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:37.744 11:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:37.744 11:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:37.744 11:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:37.744 11:29:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.744 11:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.744 11:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.744 11:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:37.744 11:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:37.744 11:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:38.312 00:16:38.312 11:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:38.312 11:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:38.312 11:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:38.881 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:38.881 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:38.881 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.881 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.881 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.881 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:38.881 { 00:16:38.881 "auth": { 00:16:38.881 "dhgroup": "ffdhe6144", 00:16:38.881 "digest": "sha512", 00:16:38.881 "state": "completed" 00:16:38.881 }, 00:16:38.881 "cntlid": 133, 00:16:38.881 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a", 00:16:38.881 "listen_address": { 00:16:38.881 "adrfam": "IPv4", 00:16:38.881 "traddr": "10.0.0.3", 00:16:38.881 "trsvcid": "4420", 00:16:38.881 "trtype": "TCP" 00:16:38.881 }, 00:16:38.881 "peer_address": { 00:16:38.881 "adrfam": "IPv4", 00:16:38.881 "traddr": "10.0.0.1", 00:16:38.881 "trsvcid": "53644", 00:16:38.881 "trtype": "TCP" 00:16:38.881 }, 00:16:38.881 "qid": 0, 00:16:38.881 "state": "enabled", 00:16:38.881 "thread": "nvmf_tgt_poll_group_000" 00:16:38.881 } 00:16:38.881 ]' 00:16:38.881 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:38.881 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:38.881 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:38.881 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 
== \f\f\d\h\e\6\1\4\4 ]] 00:16:38.881 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:38.881 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:38.881 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:38.881 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:39.139 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ODhkNTRmMmE1ZmNkZmY0ZDExY2UyMjZlNjZlZDI4MzAwNzA5YmUxMmE5YWMyYWQzAlDRWQ==: --dhchap-ctrl-secret DHHC-1:01:ZmExZTRhMGYzNzhkMWQ2OTBkMjIyMzE3OWFiZjhkMTkzDxnU: 00:16:39.139 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a --hostid 806d442c-08ba-4719-b76d-33169283459a -l 0 --dhchap-secret DHHC-1:02:ODhkNTRmMmE1ZmNkZmY0ZDExY2UyMjZlNjZlZDI4MzAwNzA5YmUxMmE5YWMyYWQzAlDRWQ==: --dhchap-ctrl-secret DHHC-1:01:ZmExZTRhMGYzNzhkMWQ2OTBkMjIyMzE3OWFiZjhkMTkzDxnU: 00:16:40.075 11:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:40.076 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:40.076 11:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a 00:16:40.076 11:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.076 11:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.076 11:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.076 11:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:40.076 11:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:40.076 11:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:40.334 11:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:16:40.334 11:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:40.334 11:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:40.334 11:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:40.334 11:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:40.334 11:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:40.334 11:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a --dhchap-key key3 00:16:40.334 11:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.334 11:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.334 11:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.334 11:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:40.334 11:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:40.334 11:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:40.900 00:16:40.900 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:40.900 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:40.900 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:41.158 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:41.158 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:41.158 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.158 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.158 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.158 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:41.158 { 00:16:41.158 "auth": { 00:16:41.158 "dhgroup": "ffdhe6144", 00:16:41.158 "digest": "sha512", 00:16:41.158 "state": "completed" 00:16:41.158 }, 00:16:41.158 "cntlid": 135, 00:16:41.158 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a", 00:16:41.158 "listen_address": { 00:16:41.158 "adrfam": "IPv4", 00:16:41.158 "traddr": "10.0.0.3", 00:16:41.158 "trsvcid": "4420", 00:16:41.158 "trtype": "TCP" 00:16:41.158 }, 00:16:41.158 "peer_address": { 00:16:41.158 "adrfam": "IPv4", 00:16:41.158 "traddr": "10.0.0.1", 00:16:41.158 "trsvcid": "53674", 00:16:41.158 "trtype": "TCP" 00:16:41.158 }, 00:16:41.158 "qid": 0, 00:16:41.158 "state": "enabled", 00:16:41.158 "thread": "nvmf_tgt_poll_group_000" 00:16:41.158 } 00:16:41.158 ]' 00:16:41.158 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:41.158 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:41.158 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:41.158 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:41.158 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:41.417 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:41.417 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:41.417 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:41.675 11:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ODUxYTg2NGMwMGE3ZWM0ODJlYmZkMzVkNDkwOGFmZWViNzcwOTJlM2FlOTM1MmRjMzZjMWRlMTQwYzZjNTgwZnmdTmU=: 00:16:41.675 11:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a --hostid 806d442c-08ba-4719-b76d-33169283459a -l 0 --dhchap-secret DHHC-1:03:ODUxYTg2NGMwMGE3ZWM0ODJlYmZkMzVkNDkwOGFmZWViNzcwOTJlM2FlOTM1MmRjMzZjMWRlMTQwYzZjNTgwZnmdTmU=: 00:16:42.241 11:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:42.241 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:42.241 11:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a 00:16:42.241 11:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.241 11:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:42.241 11:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.241 11:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:42.241 11:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:42.241 11:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:42.241 11:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:42.499 11:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:16:42.499 11:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:42.499 11:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:42.499 11:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:42.499 11:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:42.499 11:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:42.499 11:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:42.499 11:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.499 11:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:42.757 11:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.757 11:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:42.757 11:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:42.757 11:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:43.327 00:16:43.327 11:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:43.327 11:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:43.327 11:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:43.596 11:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:43.596 11:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:43.596 11:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.596 11:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.596 11:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.596 11:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:43.596 { 00:16:43.596 "auth": { 00:16:43.596 "dhgroup": "ffdhe8192", 00:16:43.596 "digest": "sha512", 00:16:43.596 "state": "completed" 00:16:43.596 }, 00:16:43.596 "cntlid": 137, 00:16:43.596 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a", 00:16:43.596 "listen_address": { 00:16:43.596 "adrfam": "IPv4", 00:16:43.596 "traddr": "10.0.0.3", 00:16:43.596 "trsvcid": "4420", 00:16:43.596 "trtype": "TCP" 00:16:43.596 }, 00:16:43.596 "peer_address": { 00:16:43.596 "adrfam": "IPv4", 00:16:43.596 "traddr": "10.0.0.1", 00:16:43.596 "trsvcid": "53688", 00:16:43.596 "trtype": "TCP" 00:16:43.596 }, 00:16:43.596 "qid": 0, 00:16:43.596 "state": "enabled", 00:16:43.596 "thread": "nvmf_tgt_poll_group_000" 00:16:43.596 } 00:16:43.596 ]' 00:16:43.596 11:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:43.596 11:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:43.597 11:29:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:43.597 11:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:43.597 11:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:43.858 11:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:43.858 11:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:43.858 11:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:44.116 11:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTU4ZTYwMjZmNWEyYzgzYjY3ZWU1MmE4ZjU0YzRmMjAwM2Q1ZGY4ZmVkZmVkZWY0w3W3ew==: --dhchap-ctrl-secret DHHC-1:03:OTgwNWJkMjE1MTUxNjMxNjRkZTlhZmI3YTJjMjIyN2RhYzM4MjEwYmYxZmI5NWFkZGQ5MTQ2M2U2NWUyYzQ3N6HE8a4=: 00:16:44.116 11:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a --hostid 806d442c-08ba-4719-b76d-33169283459a -l 0 --dhchap-secret DHHC-1:00:YTU4ZTYwMjZmNWEyYzgzYjY3ZWU1MmE4ZjU0YzRmMjAwM2Q1ZGY4ZmVkZmVkZWY0w3W3ew==: --dhchap-ctrl-secret DHHC-1:03:OTgwNWJkMjE1MTUxNjMxNjRkZTlhZmI3YTJjMjIyN2RhYzM4MjEwYmYxZmI5NWFkZGQ5MTQ2M2U2NWUyYzQ3N6HE8a4=: 00:16:45.050 11:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:45.050 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:45.050 11:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a 00:16:45.050 11:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.050 11:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.050 11:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.050 11:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:45.050 11:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:45.050 11:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:45.309 11:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:16:45.309 11:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:45.309 11:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:45.309 11:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:45.309 11:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:45.309 11:29:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:45.309 11:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:45.309 11:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.309 11:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.309 11:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.309 11:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:45.309 11:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:45.309 11:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:46.242 00:16:46.242 11:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:46.242 11:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:46.242 11:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:46.501 11:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:46.501 11:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:46.501 11:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.501 11:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.501 11:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.502 11:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:46.502 { 00:16:46.502 "auth": { 00:16:46.502 "dhgroup": "ffdhe8192", 00:16:46.502 "digest": "sha512", 00:16:46.502 "state": "completed" 00:16:46.502 }, 00:16:46.502 "cntlid": 139, 00:16:46.502 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a", 00:16:46.502 "listen_address": { 00:16:46.502 "adrfam": "IPv4", 00:16:46.502 "traddr": "10.0.0.3", 00:16:46.502 "trsvcid": "4420", 00:16:46.502 "trtype": "TCP" 00:16:46.502 }, 00:16:46.502 "peer_address": { 00:16:46.502 "adrfam": "IPv4", 00:16:46.502 "traddr": "10.0.0.1", 00:16:46.502 "trsvcid": "41358", 00:16:46.502 "trtype": "TCP" 00:16:46.502 }, 00:16:46.502 "qid": 0, 00:16:46.502 "state": "enabled", 00:16:46.502 "thread": "nvmf_tgt_poll_group_000" 00:16:46.502 } 00:16:46.502 ]' 00:16:46.502 11:29:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:46.502 11:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:46.502 11:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:46.502 11:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:46.502 11:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:46.502 11:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:46.502 11:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:46.502 11:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:47.069 11:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTdlOWYyOTEzYTYxOThiMjE5ZmM2NThmOTY2YmJmZTg53tHC: --dhchap-ctrl-secret DHHC-1:02:ZThlN2MwNmMxNTdkMjhlYTU2YjU2MTY0ZmIwMThlYzFkZjAwNTU3YTdhMTQwNDU0RjLgqQ==: 00:16:47.069 11:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a --hostid 806d442c-08ba-4719-b76d-33169283459a -l 0 --dhchap-secret DHHC-1:01:OTdlOWYyOTEzYTYxOThiMjE5ZmM2NThmOTY2YmJmZTg53tHC: --dhchap-ctrl-secret DHHC-1:02:ZThlN2MwNmMxNTdkMjhlYTU2YjU2MTY0ZmIwMThlYzFkZjAwNTU3YTdhMTQwNDU0RjLgqQ==: 00:16:48.006 11:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:48.006 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:48.006 11:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a 00:16:48.006 11:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.006 11:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.006 11:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.006 11:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:48.006 11:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:48.006 11:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:48.006 11:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:16:48.006 11:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:48.006 11:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:48.006 11:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # 
dhgroup=ffdhe8192 00:16:48.006 11:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:48.006 11:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:48.006 11:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:48.006 11:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.006 11:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.266 11:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.266 11:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:48.266 11:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:48.266 11:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:48.858 00:16:48.858 11:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:48.858 11:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:48.858 11:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:49.199 11:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:49.199 11:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:49.199 11:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.199 11:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.199 11:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.199 11:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:49.199 { 00:16:49.199 "auth": { 00:16:49.199 "dhgroup": "ffdhe8192", 00:16:49.199 "digest": "sha512", 00:16:49.199 "state": "completed" 00:16:49.199 }, 00:16:49.199 "cntlid": 141, 00:16:49.199 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a", 00:16:49.199 "listen_address": { 00:16:49.199 "adrfam": "IPv4", 00:16:49.199 "traddr": "10.0.0.3", 00:16:49.199 "trsvcid": "4420", 00:16:49.199 "trtype": "TCP" 00:16:49.199 }, 00:16:49.199 "peer_address": { 00:16:49.199 "adrfam": "IPv4", 00:16:49.199 "traddr": "10.0.0.1", 00:16:49.199 "trsvcid": "41390", 00:16:49.199 "trtype": "TCP" 00:16:49.199 }, 00:16:49.199 "qid": 0, 00:16:49.199 "state": 
"enabled", 00:16:49.199 "thread": "nvmf_tgt_poll_group_000" 00:16:49.199 } 00:16:49.199 ]' 00:16:49.199 11:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:49.199 11:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:49.199 11:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:49.199 11:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:49.199 11:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:49.199 11:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:49.199 11:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:49.199 11:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:49.768 11:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ODhkNTRmMmE1ZmNkZmY0ZDExY2UyMjZlNjZlZDI4MzAwNzA5YmUxMmE5YWMyYWQzAlDRWQ==: --dhchap-ctrl-secret DHHC-1:01:ZmExZTRhMGYzNzhkMWQ2OTBkMjIyMzE3OWFiZjhkMTkzDxnU: 00:16:49.768 11:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a --hostid 806d442c-08ba-4719-b76d-33169283459a -l 0 --dhchap-secret DHHC-1:02:ODhkNTRmMmE1ZmNkZmY0ZDExY2UyMjZlNjZlZDI4MzAwNzA5YmUxMmE5YWMyYWQzAlDRWQ==: --dhchap-ctrl-secret DHHC-1:01:ZmExZTRhMGYzNzhkMWQ2OTBkMjIyMzE3OWFiZjhkMTkzDxnU: 00:16:50.337 11:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:50.337 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:50.337 11:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a 00:16:50.337 11:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.337 11:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.337 11:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.337 11:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:50.337 11:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:50.337 11:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:50.596 11:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:16:50.596 11:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:50.596 11:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 
-- # digest=sha512 00:16:50.596 11:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:50.596 11:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:50.596 11:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:50.596 11:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a --dhchap-key key3 00:16:50.596 11:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.596 11:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.596 11:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.596 11:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:50.596 11:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:50.596 11:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:51.163 00:16:51.163 11:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:51.163 11:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:51.163 11:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:51.730 11:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:51.730 11:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:51.730 11:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.730 11:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.730 11:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.730 11:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:51.730 { 00:16:51.730 "auth": { 00:16:51.730 "dhgroup": "ffdhe8192", 00:16:51.730 "digest": "sha512", 00:16:51.730 "state": "completed" 00:16:51.730 }, 00:16:51.730 "cntlid": 143, 00:16:51.730 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a", 00:16:51.730 "listen_address": { 00:16:51.730 "adrfam": "IPv4", 00:16:51.730 "traddr": "10.0.0.3", 00:16:51.730 "trsvcid": "4420", 00:16:51.730 "trtype": "TCP" 00:16:51.730 }, 00:16:51.730 "peer_address": { 00:16:51.730 "adrfam": "IPv4", 00:16:51.730 "traddr": "10.0.0.1", 00:16:51.730 "trsvcid": "41418", 00:16:51.730 "trtype": "TCP" 00:16:51.730 }, 00:16:51.730 "qid": 0, 00:16:51.730 
"state": "enabled", 00:16:51.730 "thread": "nvmf_tgt_poll_group_000" 00:16:51.730 } 00:16:51.730 ]' 00:16:51.730 11:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:51.730 11:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:51.730 11:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:51.730 11:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:51.730 11:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:51.988 11:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:51.988 11:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:51.988 11:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:52.246 11:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ODUxYTg2NGMwMGE3ZWM0ODJlYmZkMzVkNDkwOGFmZWViNzcwOTJlM2FlOTM1MmRjMzZjMWRlMTQwYzZjNTgwZnmdTmU=: 00:16:52.246 11:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a --hostid 806d442c-08ba-4719-b76d-33169283459a -l 0 --dhchap-secret DHHC-1:03:ODUxYTg2NGMwMGE3ZWM0ODJlYmZkMzVkNDkwOGFmZWViNzcwOTJlM2FlOTM1MmRjMzZjMWRlMTQwYzZjNTgwZnmdTmU=: 00:16:53.181 11:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:53.181 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:53.181 11:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a 00:16:53.181 11:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.181 11:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.181 11:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.181 11:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:16:53.181 11:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:16:53.181 11:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:16:53.181 11:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:53.181 11:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:53.181 11:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups 
null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:53.439 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:16:53.439 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:53.439 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:53.439 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:53.439 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:53.439 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:53.439 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:53.439 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.439 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.439 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.439 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:53.439 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:53.439 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:54.373 00:16:54.373 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:54.373 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:54.373 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:54.939 11:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:54.939 11:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:54.939 11:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.939 11:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.939 11:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.939 11:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:54.939 { 00:16:54.939 "auth": { 00:16:54.939 "dhgroup": "ffdhe8192", 00:16:54.939 "digest": "sha512", 00:16:54.939 "state": "completed" 00:16:54.939 }, 00:16:54.939 
"cntlid": 145, 00:16:54.939 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a", 00:16:54.939 "listen_address": { 00:16:54.939 "adrfam": "IPv4", 00:16:54.939 "traddr": "10.0.0.3", 00:16:54.939 "trsvcid": "4420", 00:16:54.939 "trtype": "TCP" 00:16:54.939 }, 00:16:54.939 "peer_address": { 00:16:54.939 "adrfam": "IPv4", 00:16:54.939 "traddr": "10.0.0.1", 00:16:54.939 "trsvcid": "41454", 00:16:54.939 "trtype": "TCP" 00:16:54.939 }, 00:16:54.939 "qid": 0, 00:16:54.939 "state": "enabled", 00:16:54.939 "thread": "nvmf_tgt_poll_group_000" 00:16:54.939 } 00:16:54.939 ]' 00:16:54.939 11:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:54.939 11:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:54.939 11:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:54.939 11:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:54.939 11:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:55.198 11:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:55.198 11:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:55.198 11:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:55.456 11:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTU4ZTYwMjZmNWEyYzgzYjY3ZWU1MmE4ZjU0YzRmMjAwM2Q1ZGY4ZmVkZmVkZWY0w3W3ew==: --dhchap-ctrl-secret DHHC-1:03:OTgwNWJkMjE1MTUxNjMxNjRkZTlhZmI3YTJjMjIyN2RhYzM4MjEwYmYxZmI5NWFkZGQ5MTQ2M2U2NWUyYzQ3N6HE8a4=: 00:16:55.456 11:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a --hostid 806d442c-08ba-4719-b76d-33169283459a -l 0 --dhchap-secret DHHC-1:00:YTU4ZTYwMjZmNWEyYzgzYjY3ZWU1MmE4ZjU0YzRmMjAwM2Q1ZGY4ZmVkZmVkZWY0w3W3ew==: --dhchap-ctrl-secret DHHC-1:03:OTgwNWJkMjE1MTUxNjMxNjRkZTlhZmI3YTJjMjIyN2RhYzM4MjEwYmYxZmI5NWFkZGQ5MTQ2M2U2NWUyYzQ3N6HE8a4=: 00:16:56.391 11:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:56.391 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:56.391 11:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a 00:16:56.391 11:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.391 11:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.391 11:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.391 11:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a --dhchap-key key1 00:16:56.391 11:29:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.391 11:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.391 11:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.391 11:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:16:56.391 11:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:16:56.391 11:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:16:56.391 11:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:16:56.391 11:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:56.391 11:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:16:56.391 11:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:56.391 11:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key2 00:16:56.391 11:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:16:56.391 11:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:16:56.957 2024/11/20 11:29:42 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) dhchap_key:key2 hdgst:%!s(bool=false) hostnqn:nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:16:56.957 request: 00:16:56.957 { 00:16:56.957 "method": "bdev_nvme_attach_controller", 00:16:56.957 "params": { 00:16:56.957 "name": "nvme0", 00:16:56.957 "trtype": "tcp", 00:16:56.957 "traddr": "10.0.0.3", 00:16:56.957 "adrfam": "ipv4", 00:16:56.957 "trsvcid": "4420", 00:16:56.957 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:16:56.957 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a", 00:16:56.957 "prchk_reftag": false, 00:16:56.957 "prchk_guard": false, 00:16:56.957 "hdgst": false, 00:16:56.957 "ddgst": false, 00:16:56.957 "dhchap_key": "key2", 00:16:56.957 "allow_unrecognized_csi": false 00:16:56.957 } 00:16:56.957 } 00:16:56.957 Got JSON-RPC error response 00:16:56.957 GoRPCClient: error on JSON-RPC call 00:16:56.957 11:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:16:56.957 11:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 
00:16:56.957 11:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:56.957 11:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:56.957 11:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a 00:16:56.957 11:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.957 11:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.957 11:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.957 11:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:56.957 11:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.957 11:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.957 11:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.957 11:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:16:56.957 11:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:16:56.957 11:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:16:56.957 11:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:16:56.957 11:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:56.957 11:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:16:56.957 11:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:56.957 11:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:16:56.957 11:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:16:56.957 11:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:16:57.524 2024/11/20 11:29:43 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) dhchap_ctrlr_key:ckey2 dhchap_key:key1 hdgst:%!s(bool=false) 
hostnqn:nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:16:57.524 request: 00:16:57.524 { 00:16:57.524 "method": "bdev_nvme_attach_controller", 00:16:57.524 "params": { 00:16:57.524 "name": "nvme0", 00:16:57.524 "trtype": "tcp", 00:16:57.524 "traddr": "10.0.0.3", 00:16:57.524 "adrfam": "ipv4", 00:16:57.524 "trsvcid": "4420", 00:16:57.524 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:16:57.524 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a", 00:16:57.524 "prchk_reftag": false, 00:16:57.524 "prchk_guard": false, 00:16:57.524 "hdgst": false, 00:16:57.524 "ddgst": false, 00:16:57.524 "dhchap_key": "key1", 00:16:57.524 "dhchap_ctrlr_key": "ckey2", 00:16:57.524 "allow_unrecognized_csi": false 00:16:57.524 } 00:16:57.524 } 00:16:57.524 Got JSON-RPC error response 00:16:57.524 GoRPCClient: error on JSON-RPC call 00:16:57.782 11:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:16:57.782 11:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:57.782 11:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:57.782 11:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:57.782 11:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a 00:16:57.782 11:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.782 11:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.782 11:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.782 11:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a --dhchap-key key1 00:16:57.782 11:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.782 11:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.782 11:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.782 11:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:57.782 11:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:16:57.782 11:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:57.782 11:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:16:57.782 11:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:57.783 11:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # 
type -t bdev_connect 00:16:57.783 11:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:57.783 11:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:57.783 11:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:57.783 11:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:58.348 2024/11/20 11:29:43 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) dhchap_ctrlr_key:ckey1 dhchap_key:key1 hdgst:%!s(bool=false) hostnqn:nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:16:58.348 request: 00:16:58.348 { 00:16:58.348 "method": "bdev_nvme_attach_controller", 00:16:58.348 "params": { 00:16:58.348 "name": "nvme0", 00:16:58.348 "trtype": "tcp", 00:16:58.348 "traddr": "10.0.0.3", 00:16:58.348 "adrfam": "ipv4", 00:16:58.348 "trsvcid": "4420", 00:16:58.348 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:16:58.348 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a", 00:16:58.348 "prchk_reftag": false, 00:16:58.348 "prchk_guard": false, 00:16:58.348 "hdgst": false, 00:16:58.348 "ddgst": false, 00:16:58.348 "dhchap_key": "key1", 00:16:58.348 "dhchap_ctrlr_key": "ckey1", 00:16:58.348 "allow_unrecognized_csi": false 00:16:58.348 } 00:16:58.348 } 00:16:58.348 Got JSON-RPC error response 00:16:58.348 GoRPCClient: error on JSON-RPC call 00:16:58.607 11:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:16:58.607 11:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:58.607 11:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:58.607 11:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:58.607 11:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a 00:16:58.607 11:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.607 11:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.607 11:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.607 11:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 76158 00:16:58.607 11:29:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 76158 ']' 00:16:58.607 11:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 76158 00:16:58.607 11:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:16:58.607 11:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:58.607 11:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76158 00:16:58.607 11:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:58.607 killing process with pid 76158 00:16:58.607 11:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:58.607 11:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76158' 00:16:58.607 11:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 76158 00:16:58.607 11:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 76158 00:16:58.607 11:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:16:58.607 11:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:58.607 11:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:58.607 11:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.607 11:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=81273 00:16:58.607 11:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:16:58.607 11:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 81273 00:16:58.607 11:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 81273 ']' 00:16:58.607 11:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:58.607 11:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:58.607 11:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
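For context, this is the point where the nvmf target is restarted with DH-CHAP debug logging (-L nvmf_auth) and RPC-gated init before the key-rotation tests. A minimal sketch of that start-and-wait sequence, assuming the nvmf_tgt_ns_spdk namespace and repo paths shown in the trace; the polling loop below is only an approximation of what the waitforlisten helper does, not the helper itself:

  # start the target in its namespace with auth debug logging and RPC-gated init
  ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt \
      -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth &
  nvmfpid=$!
  # poll the default RPC socket until the app starts answering RPCs
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done
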
00:16:58.607 11:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:58.607 11:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.866 11:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:58.866 11:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:16:58.866 11:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:58.866 11:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:58.866 11:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.125 11:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:59.125 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:59.125 11:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:16:59.125 11:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 81273 00:16:59.125 11:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 81273 ']' 00:16:59.125 11:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:59.125 11:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:59.125 11:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:16:59.125 11:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:59.125 11:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.385 11:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:59.385 11:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:16:59.385 11:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:16:59.385 11:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.385 11:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.385 null0 00:16:59.385 11:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.385 11:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:16:59.385 11:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.jcw 00:16:59.385 11:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.385 11:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.385 11:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.385 11:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.2fz ]] 00:16:59.385 11:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.2fz 00:16:59.385 11:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.385 11:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.385 11:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.385 11:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:16:59.385 11:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.zsS 00:16:59.385 11:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.385 11:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.385 11:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.385 11:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.Ggp ]] 00:16:59.385 11:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Ggp 00:16:59.385 11:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.385 11:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.385 11:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.385 11:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:16:59.385 11:29:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.Zf4 00:16:59.385 11:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.385 11:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.643 11:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.643 11:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.Vdl ]] 00:16:59.644 11:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Vdl 00:16:59.644 11:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.644 11:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.644 11:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.644 11:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:16:59.644 11:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.SGZ 00:16:59.644 11:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.644 11:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.644 11:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.644 11:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:16:59.644 11:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:16:59.644 11:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:59.644 11:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:59.644 11:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:59.644 11:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:59.644 11:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:59.644 11:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a --dhchap-key key3 00:16:59.644 11:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.644 11:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.644 11:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.644 11:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:59.644 11:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 
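The trace above loads each generated secret file into the target keyring (key0..key3 plus ckey0..ckey2) and then authorizes the host for key3 ahead of the sha512/ffdhe8192 connect. A condensed sketch of that target-side setup, using the key path and NQNs from the trace and assuming the target answers on the default /var/tmp/spdk.sock socket (the rpc_py variable is just a convenience here):

  rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # register the DH-CHAP secret file with the target keyring under the name key3
  $rpc_py keyring_file_add_key key3 /tmp/spdk.key-sha512.SGZ
  # allow the host NQN on the subsystem, bound to that key
  $rpc_py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
      nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a \
      --dhchap-key key3
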
00:16:59.644 11:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:00.577 nvme0n1 00:17:00.577 11:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:00.577 11:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:00.577 11:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:00.835 11:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:00.835 11:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:00.835 11:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.835 11:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:00.835 11:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.835 11:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:00.835 { 00:17:00.835 "auth": { 00:17:00.835 "dhgroup": "ffdhe8192", 00:17:00.835 "digest": "sha512", 00:17:00.835 "state": "completed" 00:17:00.835 }, 00:17:00.835 "cntlid": 1, 00:17:00.835 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a", 00:17:00.835 "listen_address": { 00:17:00.835 "adrfam": "IPv4", 00:17:00.835 "traddr": "10.0.0.3", 00:17:00.835 "trsvcid": "4420", 00:17:00.835 "trtype": "TCP" 00:17:00.835 }, 00:17:00.835 "peer_address": { 00:17:00.835 "adrfam": "IPv4", 00:17:00.835 "traddr": "10.0.0.1", 00:17:00.835 "trsvcid": "51144", 00:17:00.835 "trtype": "TCP" 00:17:00.835 }, 00:17:00.835 "qid": 0, 00:17:00.835 "state": "enabled", 00:17:00.835 "thread": "nvmf_tgt_poll_group_000" 00:17:00.835 } 00:17:00.835 ]' 00:17:00.835 11:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:00.835 11:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:00.835 11:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:00.835 11:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:00.835 11:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:01.093 11:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:01.093 11:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:01.093 11:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:01.352 11:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:ODUxYTg2NGMwMGE3ZWM0ODJlYmZkMzVkNDkwOGFmZWViNzcwOTJlM2FlOTM1MmRjMzZjMWRlMTQwYzZjNTgwZnmdTmU=: 00:17:01.352 11:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a --hostid 806d442c-08ba-4719-b76d-33169283459a -l 0 --dhchap-secret DHHC-1:03:ODUxYTg2NGMwMGE3ZWM0ODJlYmZkMzVkNDkwOGFmZWViNzcwOTJlM2FlOTM1MmRjMzZjMWRlMTQwYzZjNTgwZnmdTmU=: 00:17:01.921 11:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:02.179 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:02.179 11:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a 00:17:02.179 11:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.179 11:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.179 11:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.179 11:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a --dhchap-key key3 00:17:02.179 11:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.179 11:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.179 11:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.179 11:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:17:02.179 11:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:17:02.438 11:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:17:02.438 11:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:17:02.438 11:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:17:02.438 11:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:17:02.438 11:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:02.438 11:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:17:02.438 11:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:02.438 11:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:02.438 11:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:02.438 11:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:02.697 2024/11/20 11:29:48 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) dhchap_key:key3 hdgst:%!s(bool=false) hostnqn:nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:17:02.697 request: 00:17:02.697 { 00:17:02.697 "method": "bdev_nvme_attach_controller", 00:17:02.697 "params": { 00:17:02.697 "name": "nvme0", 00:17:02.697 "trtype": "tcp", 00:17:02.697 "traddr": "10.0.0.3", 00:17:02.697 "adrfam": "ipv4", 00:17:02.697 "trsvcid": "4420", 00:17:02.697 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:02.697 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a", 00:17:02.697 "prchk_reftag": false, 00:17:02.697 "prchk_guard": false, 00:17:02.697 "hdgst": false, 00:17:02.697 "ddgst": false, 00:17:02.697 "dhchap_key": "key3", 00:17:02.697 "allow_unrecognized_csi": false 00:17:02.697 } 00:17:02.697 } 00:17:02.697 Got JSON-RPC error response 00:17:02.697 GoRPCClient: error on JSON-RPC call 00:17:02.697 11:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:02.697 11:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:02.697 11:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:02.697 11:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:02.698 11:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:17:02.698 11:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:17:02.698 11:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:17:02.698 11:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:17:03.266 11:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:17:03.266 11:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:17:03.266 11:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:17:03.266 11:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:17:03.266 11:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:03.266 11:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@644 -- # type -t bdev_connect 00:17:03.266 11:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:03.266 11:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:03.266 11:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:03.266 11:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:03.525 2024/11/20 11:29:48 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) dhchap_key:key3 hdgst:%!s(bool=false) hostnqn:nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:17:03.525 request: 00:17:03.525 { 00:17:03.525 "method": "bdev_nvme_attach_controller", 00:17:03.525 "params": { 00:17:03.525 "name": "nvme0", 00:17:03.525 "trtype": "tcp", 00:17:03.525 "traddr": "10.0.0.3", 00:17:03.525 "adrfam": "ipv4", 00:17:03.525 "trsvcid": "4420", 00:17:03.525 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:03.525 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a", 00:17:03.525 "prchk_reftag": false, 00:17:03.525 "prchk_guard": false, 00:17:03.525 "hdgst": false, 00:17:03.525 "ddgst": false, 00:17:03.525 "dhchap_key": "key3", 00:17:03.525 "allow_unrecognized_csi": false 00:17:03.525 } 00:17:03.525 } 00:17:03.525 Got JSON-RPC error response 00:17:03.525 GoRPCClient: error on JSON-RPC call 00:17:03.525 11:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:03.525 11:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:03.525 11:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:03.525 11:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:03.525 11:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:17:03.525 11:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:17:03.525 11:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:17:03.525 11:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:03.525 11:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:03.525 11:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:03.784 11:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a 00:17:03.784 11:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.784 11:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.784 11:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.784 11:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a 00:17:03.784 11:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.784 11:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.784 11:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.784 11:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:03.784 11:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:17:03.784 11:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:03.784 11:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:17:03.784 11:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:03.784 11:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:17:03.784 11:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:03.784 11:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:03.784 11:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:03.784 11:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:04.349 2024/11/20 11:29:49 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) dhchap_ctrlr_key:key1 dhchap_key:key0 hdgst:%!s(bool=false) hostnqn:nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) 
subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:17:04.349 request: 00:17:04.349 { 00:17:04.349 "method": "bdev_nvme_attach_controller", 00:17:04.349 "params": { 00:17:04.349 "name": "nvme0", 00:17:04.349 "trtype": "tcp", 00:17:04.349 "traddr": "10.0.0.3", 00:17:04.349 "adrfam": "ipv4", 00:17:04.349 "trsvcid": "4420", 00:17:04.349 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:04.349 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a", 00:17:04.349 "prchk_reftag": false, 00:17:04.349 "prchk_guard": false, 00:17:04.349 "hdgst": false, 00:17:04.349 "ddgst": false, 00:17:04.349 "dhchap_key": "key0", 00:17:04.349 "dhchap_ctrlr_key": "key1", 00:17:04.349 "allow_unrecognized_csi": false 00:17:04.349 } 00:17:04.349 } 00:17:04.349 Got JSON-RPC error response 00:17:04.349 GoRPCClient: error on JSON-RPC call 00:17:04.350 11:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:04.350 11:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:04.350 11:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:04.350 11:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:04.350 11:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:17:04.350 11:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:17:04.350 11:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:17:04.915 nvme0n1 00:17:04.915 11:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:17:04.915 11:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:04.915 11:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:17:05.173 11:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:05.173 11:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:05.173 11:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:05.739 11:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a --dhchap-key key1 00:17:05.739 11:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.739 11:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # 
set +x 00:17:05.739 11:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.739 11:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:17:05.739 11:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:17:05.739 11:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:17:06.674 nvme0n1 00:17:06.674 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:17:06.674 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:17:06.674 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:07.240 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:07.240 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:07.240 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.240 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.240 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.240 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:17:07.240 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:17:07.240 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:07.498 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:07.498 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:ODhkNTRmMmE1ZmNkZmY0ZDExY2UyMjZlNjZlZDI4MzAwNzA5YmUxMmE5YWMyYWQzAlDRWQ==: --dhchap-ctrl-secret DHHC-1:03:ODUxYTg2NGMwMGE3ZWM0ODJlYmZkMzVkNDkwOGFmZWViNzcwOTJlM2FlOTM1MmRjMzZjMWRlMTQwYzZjNTgwZnmdTmU=: 00:17:07.498 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a --hostid 806d442c-08ba-4719-b76d-33169283459a -l 0 --dhchap-secret DHHC-1:02:ODhkNTRmMmE1ZmNkZmY0ZDExY2UyMjZlNjZlZDI4MzAwNzA5YmUxMmE5YWMyYWQzAlDRWQ==: --dhchap-ctrl-secret DHHC-1:03:ODUxYTg2NGMwMGE3ZWM0ODJlYmZkMzVkNDkwOGFmZWViNzcwOTJlM2FlOTM1MmRjMzZjMWRlMTQwYzZjNTgwZnmdTmU=: 00:17:08.432 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 
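Once matching keys are installed on the subsystem, the kernel host is connected with nvme-cli using bidirectional DH-CHAP secrets, as traced above. A trimmed sketch of that connect/disconnect pair; the DHHC-1 strings are elided to placeholders here and would be the full secrets from the key files in practice:

  # kernel-initiator connect with host and controller DH-CHAP secrets
  nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
      -q nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a \
      --hostid 806d442c-08ba-4719-b76d-33169283459a -l 0 \
      --dhchap-secret 'DHHC-1:02:<host secret>' \
      --dhchap-ctrl-secret 'DHHC-1:03:<controller secret>'
  # tear down the kernel controller when done
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0
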
00:17:08.432 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:17:08.432 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:17:08.432 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:17:08.432 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:17:08.432 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:17:08.432 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:17:08.432 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:08.432 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:08.690 11:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 00:17:08.690 11:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:17:08.690 11:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:17:08.690 11:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:17:08.690 11:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:08.690 11:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:17:08.690 11:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:08.690 11:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 00:17:08.690 11:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:17:08.690 11:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:17:09.257 2024/11/20 11:29:54 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) dhchap_key:key1 hdgst:%!s(bool=false) hostnqn:nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:17:09.257 request: 00:17:09.257 { 00:17:09.257 "method": "bdev_nvme_attach_controller", 00:17:09.257 "params": { 00:17:09.257 "name": "nvme0", 00:17:09.257 "trtype": "tcp", 00:17:09.257 "traddr": "10.0.0.3", 00:17:09.257 "adrfam": "ipv4", 
00:17:09.257 "trsvcid": "4420", 00:17:09.257 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:09.257 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a", 00:17:09.257 "prchk_reftag": false, 00:17:09.257 "prchk_guard": false, 00:17:09.257 "hdgst": false, 00:17:09.257 "ddgst": false, 00:17:09.257 "dhchap_key": "key1", 00:17:09.257 "allow_unrecognized_csi": false 00:17:09.257 } 00:17:09.257 } 00:17:09.257 Got JSON-RPC error response 00:17:09.257 GoRPCClient: error on JSON-RPC call 00:17:09.257 11:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:09.257 11:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:09.257 11:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:09.257 11:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:09.257 11:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:09.257 11:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:09.257 11:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:10.631 nvme0n1 00:17:10.631 11:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:17:10.631 11:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:17:10.631 11:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:10.631 11:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:10.631 11:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:10.631 11:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:11.197 11:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a 00:17:11.197 11:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.197 11:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.197 11:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.197 11:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:17:11.197 11:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 
-q nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:17:11.197 11:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:17:11.455 nvme0n1 00:17:11.455 11:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:17:11.455 11:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:11.455 11:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:17:12.022 11:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:12.022 11:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:12.022 11:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:12.281 11:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a --dhchap-key key1 --dhchap-ctrlr-key key3 00:17:12.281 11:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.281 11:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.539 11:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.539 11:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:OTdlOWYyOTEzYTYxOThiMjE5ZmM2NThmOTY2YmJmZTg53tHC: '' 2s 00:17:12.539 11:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:17:12.539 11:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:17:12.539 11:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:OTdlOWYyOTEzYTYxOThiMjE5ZmM2NThmOTY2YmJmZTg53tHC: 00:17:12.539 11:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:17:12.539 11:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:17:12.539 11:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:17:12.539 11:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:OTdlOWYyOTEzYTYxOThiMjE5ZmM2NThmOTY2YmJmZTg53tHC: ]] 00:17:12.539 11:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:OTdlOWYyOTEzYTYxOThiMjE5ZmM2NThmOTY2YmJmZTg53tHC: 00:17:12.539 11:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:17:12.539 11:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:17:12.539 11:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:17:14.450 11:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@241 -- # waitforblk nvme0n1 00:17:14.450 11:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:17:14.450 11:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:17:14.450 11:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:17:14.450 11:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:17:14.450 11:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:17:14.450 11:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:17:14.450 11:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a --dhchap-key key1 --dhchap-ctrlr-key key2 00:17:14.450 11:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.450 11:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:14.450 11:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.450 11:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' DHHC-1:02:ODhkNTRmMmE1ZmNkZmY0ZDExY2UyMjZlNjZlZDI4MzAwNzA5YmUxMmE5YWMyYWQzAlDRWQ==: 2s 00:17:14.450 11:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:17:14.450 11:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:17:14.450 11:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:17:14.450 11:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:ODhkNTRmMmE1ZmNkZmY0ZDExY2UyMjZlNjZlZDI4MzAwNzA5YmUxMmE5YWMyYWQzAlDRWQ==: 00:17:14.450 11:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:17:14.450 11:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:17:14.450 11:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:17:14.450 11:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:ODhkNTRmMmE1ZmNkZmY0ZDExY2UyMjZlNjZlZDI4MzAwNzA5YmUxMmE5YWMyYWQzAlDRWQ==: ]] 00:17:14.450 11:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:ODhkNTRmMmE1ZmNkZmY0ZDExY2UyMjZlNjZlZDI4MzAwNzA5YmUxMmE5YWMyYWQzAlDRWQ==: 00:17:14.450 11:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:17:14.450 11:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:17:16.349 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:17:16.349 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:17:16.349 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:17:16.349 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:17:16.349 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:17:16.349 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:17:16.608 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:17:16.608 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:16.608 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:16.608 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:16.608 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.608 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.608 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.608 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:16.608 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:16.608 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:17.983 nvme0n1 00:17:17.983 11:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:17.983 11:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.983 11:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.983 11:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.983 11:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:17.983 11:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:18.550 11:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:17:18.550 11:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:17:18.550 11:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:17:19.116 11:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:19.116 11:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a 00:17:19.116 11:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.116 11:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.117 11:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.117 11:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:17:19.117 11:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:17:19.375 11:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:17:19.375 11:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:17:19.375 11:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:19.942 11:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:19.942 11:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:19.942 11:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.942 11:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.942 11:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.942 11:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:17:19.942 11:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:17:19.942 11:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:17:19.942 11:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:17:19.942 11:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:19.942 11:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:17:19.942 11:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:19.942 11:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:17:19.942 11:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 
--dhchap-ctrlr-key key3 00:17:20.506 2024/11/20 11:30:06 error on JSON-RPC call, method: bdev_nvme_set_keys, params: map[dhchap_ctrlr_key:key3 dhchap_key:key1 name:nvme0], err: error received for bdev_nvme_set_keys method, err: Code=-13 Msg=Permission denied 00:17:20.506 request: 00:17:20.506 { 00:17:20.506 "method": "bdev_nvme_set_keys", 00:17:20.506 "params": { 00:17:20.506 "name": "nvme0", 00:17:20.506 "dhchap_key": "key1", 00:17:20.506 "dhchap_ctrlr_key": "key3" 00:17:20.506 } 00:17:20.506 } 00:17:20.506 Got JSON-RPC error response 00:17:20.506 GoRPCClient: error on JSON-RPC call 00:17:20.506 11:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:20.506 11:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:20.507 11:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:20.507 11:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:20.507 11:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:17:20.507 11:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:20.507 11:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:17:21.073 11:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 1 != 0 )) 00:17:21.073 11:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:17:22.009 11:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:17:22.009 11:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:17:22.009 11:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:22.267 11:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:17:22.267 11:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:22.267 11:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.267 11:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:22.267 11:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.267 11:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:22.267 11:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:22.267 11:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:23.642 nvme0n1 00:17:23.642 11:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:23.642 11:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.642 11:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.642 11:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.642 11:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:17:23.642 11:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:17:23.642 11:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:17:23.642 11:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:17:23.642 11:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:23.642 11:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:17:23.642 11:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:23.642 11:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:17:23.642 11:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:17:24.207 2024/11/20 11:30:09 error on JSON-RPC call, method: bdev_nvme_set_keys, params: map[dhchap_ctrlr_key:key0 dhchap_key:key2 name:nvme0], err: error received for bdev_nvme_set_keys method, err: Code=-13 Msg=Permission denied 00:17:24.207 request: 00:17:24.207 { 00:17:24.207 "method": "bdev_nvme_set_keys", 00:17:24.207 "params": { 00:17:24.207 "name": "nvme0", 00:17:24.207 "dhchap_key": "key2", 00:17:24.207 "dhchap_ctrlr_key": "key0" 00:17:24.207 } 00:17:24.207 } 00:17:24.207 Got JSON-RPC error response 00:17:24.207 GoRPCClient: error on JSON-RPC call 00:17:24.207 11:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:24.207 11:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:24.207 11:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:24.207 11:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:24.207 11:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:17:24.207 11:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 
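Editor's note on the target/auth.sh steps above: they exercise DH-CHAP re-keying in both directions. The target's accepted keys for this host are rotated with nvmf_subsystem_set_keys, the attached host controller follows with bdev_nvme_set_keys, and the NOT wrapper asserts that a pair the target no longer allows is rejected with JSON-RPC Code=-13 (Permission denied). A condensed sketch of that flow, using only the rpc.py paths, sockets, NQNs and key names already visible in the log and assuming the target answers on its default RPC socket (nothing here is a new or guaranteed interface):

  # Target side: allow only key2 (host) / key3 (controller) for this host NQN.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_set_keys \
      nqn.2024-03.io.spdk:cnode0 \
      nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a \
      --dhchap-key key2 --dhchap-ctrlr-key key3

  # Host side: re-key the already-attached controller to the matching pair.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock \
      bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3

  # Negative case: a pair the target does not accept should fail with
  # "Code=-13 Msg=Permission denied", exactly as logged above.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock \
      bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 \
      || echo 're-key rejected as expected'

After each rejected re-key the harness polls bdev_nvme_get_controllers (jq length) once per second until the controller count drops to zero, which is the sleep 1s loop that follows in the log.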
00:17:24.207 11:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:24.773 11:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:17:24.773 11:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:17:25.708 11:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:17:25.708 11:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:17:25.708 11:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:25.966 11:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:17:25.966 11:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:17:25.966 11:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:17:25.966 11:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 76184 00:17:25.966 11:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 76184 ']' 00:17:25.966 11:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 76184 00:17:25.966 11:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:17:25.966 11:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:25.966 11:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76184 00:17:26.224 11:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:17:26.224 killing process with pid 76184 00:17:26.224 11:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:17:26.224 11:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76184' 00:17:26.224 11:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 76184 00:17:26.224 11:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 76184 00:17:26.483 11:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:17:26.483 11:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:26.483 11:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:17:26.483 11:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:26.483 11:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:17:26.483 11:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:26.483 11:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:26.483 rmmod nvme_tcp 00:17:26.483 rmmod nvme_fabrics 00:17:26.483 rmmod nvme_keyring 00:17:26.483 11:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:26.483 11:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@128 -- # set -e 00:17:26.483 11:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:17:26.483 11:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@517 -- # '[' -n 81273 ']' 00:17:26.483 11:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # killprocess 81273 00:17:26.483 11:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 81273 ']' 00:17:26.483 11:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 81273 00:17:26.483 11:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:17:26.483 11:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:26.483 11:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81273 00:17:26.483 11:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:26.483 killing process with pid 81273 00:17:26.483 11:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:26.483 11:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81273' 00:17:26.483 11:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 81273 00:17:26.483 11:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 81273 00:17:26.742 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:26.742 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:26.742 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:26.742 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:17:26.742 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-save 00:17:26.742 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-restore 00:17:26.742 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:26.742 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:26.742 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:17:26.742 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:17:26.742 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:17:26.742 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:17:26.742 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:17:26.742 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:17:26.742 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:17:26.742 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:17:26.742 11:30:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:17:26.742 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:17:26.742 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:17:26.742 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:17:26.742 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:26.742 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:26.742 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@246 -- # remove_spdk_ns 00:17:26.742 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:26.742 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:26.742 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:27.001 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@300 -- # return 0 00:17:27.001 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.jcw /tmp/spdk.key-sha256.zsS /tmp/spdk.key-sha384.Zf4 /tmp/spdk.key-sha512.SGZ /tmp/spdk.key-sha512.2fz /tmp/spdk.key-sha384.Ggp /tmp/spdk.key-sha256.Vdl '' /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log /home/vagrant/spdk_repo/spdk/../output/nvmf-auth.log 00:17:27.001 00:17:27.001 real 3m36.440s 00:17:27.001 user 8m50.866s 00:17:27.001 sys 0m24.526s 00:17:27.001 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:27.001 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:27.001 ************************************ 00:17:27.001 END TEST nvmf_auth_target 00:17:27.001 ************************************ 00:17:27.001 11:30:12 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:17:27.001 11:30:12 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:17:27.001 11:30:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:17:27.001 11:30:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:27.001 11:30:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:27.001 ************************************ 00:17:27.001 START TEST nvmf_bdevio_no_huge 00:17:27.001 ************************************ 00:17:27.001 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:17:27.001 * Looking for test storage... 
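Editor's note, looking back at the teardown that closed the auth-target run just before this bdevio test: nvmftestfini stops the host and target daemons, unloads the initiator kernel modules, strips only the SPDK_NVMF-tagged iptables rules, and dismantles the veth/bridge/namespace topology, while the auth cleanup removes the generated key files. A rough outline under the assumption of root on the build VM, using commands that appear in the log (remove_spdk_ns is silenced there, so the namespace deletion is inferred, and the pid variable is a placeholder):

  kill "$nvmfpid"                                        # killprocess <pid> from common.sh
  modprobe -v -r nvme-tcp                                # rmmod output above shows nvme_fabrics
  modprobe -v -r nvme-fabrics                            # and nvme_keyring going with it
  iptables-save | grep -v SPDK_NVMF | iptables-restore   # keep everything but the SPDK rules

  for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
      ip link set "$dev" nomaster
      ip link set "$dev" down
  done
  ip link delete nvmf_br type bridge
  ip link delete nvmf_init_if
  ip link delete nvmf_init_if2
  ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
  ip netns delete nvmf_tgt_ns_spdk                       # assumed body of remove_spdk_ns

  rm -f /tmp/spdk.key-*                                  # DH-CHAP key files created by the test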
00:17:27.001 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:17:27.001 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:27.001 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # lcov --version 00:17:27.001 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:27.001 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:27.001 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:27.001 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:27.001 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:27.001 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:17:27.001 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:17:27.001 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:17:27.001 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:17:27.002 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 00:17:27.002 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:17:27.002 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:17:27.002 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:27.002 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:17:27.002 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:17:27.002 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:27.002 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:27.002 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:17:27.002 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:17:27.002 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:27.002 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:17:27.002 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:17:27.002 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:17:27.002 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:17:27.002 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:27.002 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:17:27.002 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:17:27.002 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:27.002 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:27.002 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:17:27.002 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:27.002 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:27.002 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:27.002 --rc genhtml_branch_coverage=1 00:17:27.002 --rc genhtml_function_coverage=1 00:17:27.002 --rc genhtml_legend=1 00:17:27.002 --rc geninfo_all_blocks=1 00:17:27.002 --rc geninfo_unexecuted_blocks=1 00:17:27.002 00:17:27.002 ' 00:17:27.002 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:27.002 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:27.002 --rc genhtml_branch_coverage=1 00:17:27.002 --rc genhtml_function_coverage=1 00:17:27.002 --rc genhtml_legend=1 00:17:27.002 --rc geninfo_all_blocks=1 00:17:27.002 --rc geninfo_unexecuted_blocks=1 00:17:27.002 00:17:27.002 ' 00:17:27.002 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:27.002 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:27.002 --rc genhtml_branch_coverage=1 00:17:27.002 --rc genhtml_function_coverage=1 00:17:27.002 --rc genhtml_legend=1 00:17:27.002 --rc geninfo_all_blocks=1 00:17:27.002 --rc geninfo_unexecuted_blocks=1 00:17:27.002 00:17:27.002 ' 00:17:27.002 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:27.002 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:27.002 --rc genhtml_branch_coverage=1 00:17:27.002 --rc genhtml_function_coverage=1 00:17:27.002 --rc genhtml_legend=1 00:17:27.002 --rc geninfo_all_blocks=1 00:17:27.002 --rc geninfo_unexecuted_blocks=1 00:17:27.002 00:17:27.002 ' 00:17:27.002 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:27.002 
11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:17:27.002 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:27.002 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:27.002 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:27.002 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:27.002 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:27.002 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:27.002 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:27.002 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:27.002 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:27.261 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:27.261 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a 00:17:27.261 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=806d442c-08ba-4719-b76d-33169283459a 00:17:27.261 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:27.261 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:27.261 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:27.261 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:27.261 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:27.261 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:17:27.261 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:27.261 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:27.262 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:27.262 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:27.262 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:27.262 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:27.262 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:17:27.262 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:27.262 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:17:27.262 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:27.262 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:27.262 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:27.262 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:27.262 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:27.262 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:27.262 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:27.262 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:27.262 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:27.262 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:27.262 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:27.262 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:27.262 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:17:27.262 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:27.262 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:27.262 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:27.262 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:27.262 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:27.262 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:27.262 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:27.262 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:27.262 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:17:27.262 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:17:27.262 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:17:27.262 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:17:27.262 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:17:27.262 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@460 -- # nvmf_veth_init 00:17:27.262 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:27.262 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:17:27.262 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:17:27.262 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:17:27.262 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:27.262 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:17:27.262 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:27.262 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:17:27.262 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:27.262 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:17:27.262 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:27.262 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:27.262 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:27.262 
11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:27.262 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:27.262 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:27.262 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:17:27.262 Cannot find device "nvmf_init_br" 00:17:27.262 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # true 00:17:27.262 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:17:27.262 Cannot find device "nvmf_init_br2" 00:17:27.262 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # true 00:17:27.262 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:17:27.262 Cannot find device "nvmf_tgt_br" 00:17:27.262 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@164 -- # true 00:17:27.262 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:17:27.262 Cannot find device "nvmf_tgt_br2" 00:17:27.262 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@165 -- # true 00:17:27.262 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:17:27.262 Cannot find device "nvmf_init_br" 00:17:27.262 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@166 -- # true 00:17:27.262 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:17:27.262 Cannot find device "nvmf_init_br2" 00:17:27.262 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@167 -- # true 00:17:27.262 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:17:27.262 Cannot find device "nvmf_tgt_br" 00:17:27.262 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@168 -- # true 00:17:27.262 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:17:27.262 Cannot find device "nvmf_tgt_br2" 00:17:27.262 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@169 -- # true 00:17:27.262 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:17:27.262 Cannot find device "nvmf_br" 00:17:27.262 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@170 -- # true 00:17:27.262 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:17:27.262 Cannot find device "nvmf_init_if" 00:17:27.262 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@171 -- # true 00:17:27.262 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:17:27.262 Cannot find device "nvmf_init_if2" 00:17:27.262 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@172 -- # true 00:17:27.263 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete 
nvmf_tgt_if 00:17:27.263 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:27.263 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@173 -- # true 00:17:27.263 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:27.263 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:27.263 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@174 -- # true 00:17:27.263 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:17:27.263 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:27.263 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:17:27.263 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:27.263 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:27.263 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:27.263 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:27.521 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:27.521 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:17:27.521 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:17:27.521 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:17:27.521 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:17:27.521 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:17:27.521 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:17:27.521 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:17:27.521 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:17:27.521 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:17:27.521 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:27.521 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:27.521 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:27.521 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:17:27.521 11:30:12 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:17:27.521 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:17:27.521 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:17:27.521 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:27.521 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:27.521 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:27.521 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:17:27.521 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:17:27.521 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:17:27.521 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:27.521 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:17:27.521 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:17:27.521 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:27.522 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.071 ms 00:17:27.522 00:17:27.522 --- 10.0.0.3 ping statistics --- 00:17:27.522 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:27.522 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:17:27.522 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:17:27.522 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:17:27.522 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.050 ms 00:17:27.522 00:17:27.522 --- 10.0.0.4 ping statistics --- 00:17:27.522 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:27.522 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:17:27.522 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:27.522 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:27.522 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:17:27.522 00:17:27.522 --- 10.0.0.1 ping statistics --- 00:17:27.522 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:27.522 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:17:27.522 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:17:27.522 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:17:27.522 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.073 ms 00:17:27.522 00:17:27.522 --- 10.0.0.2 ping statistics --- 00:17:27.522 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:27.522 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:17:27.522 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:27.522 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@461 -- # return 0 00:17:27.522 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:27.522 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:27.522 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:27.522 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:27.522 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:27.522 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:27.522 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:27.522 11:30:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:17:27.522 11:30:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:27.522 11:30:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:27.522 11:30:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:27.522 11:30:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # nvmfpid=82157 00:17:27.522 11:30:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # waitforlisten 82157 00:17:27.522 11:30:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:17:27.522 11:30:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # '[' -z 82157 ']' 00:17:27.522 11:30:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:27.522 11:30:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:27.522 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:27.522 11:30:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:27.522 11:30:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:27.522 11:30:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:27.522 [2024-11-20 11:30:13.064207] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 
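Editor's note on the nvmf_veth_init block above: it rebuilds the isolated test network that the target now starting in the namespace will listen on. Two veth pairs face the initiator (10.0.0.1 and 10.0.0.2), two face the target inside nvmf_tgt_ns_spdk (10.0.0.3 and 10.0.0.4), the peer ends are enslaved to a bridge, TCP/4420 is opened in iptables, and reachability is confirmed with single pings. One side of that topology, reduced to the commands seen in the log (the second interface pair is configured identically):

  ip netns add nvmf_tgt_ns_spdk

  ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator end + bridge end
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # target end + bridge end
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk               # move the target end into the netns

  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if

  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up

  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br

  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT'

  ping -c 1 10.0.0.3                                           # initiator -> target
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1            # target -> initiator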
00:17:27.522 [2024-11-20 11:30:13.064805] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:17:27.781 [2024-11-20 11:30:13.221180] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:27.781 [2024-11-20 11:30:13.284854] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:27.781 [2024-11-20 11:30:13.284911] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:27.781 [2024-11-20 11:30:13.284922] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:27.781 [2024-11-20 11:30:13.284930] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:27.781 [2024-11-20 11:30:13.284938] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:27.781 [2024-11-20 11:30:13.285465] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:17:27.781 [2024-11-20 11:30:13.285591] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:17:27.781 [2024-11-20 11:30:13.285704] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:17:27.781 [2024-11-20 11:30:13.285704] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:17:28.041 11:30:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:28.041 11:30:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@868 -- # return 0 00:17:28.041 11:30:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:28.041 11:30:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:28.041 11:30:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:28.041 11:30:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:28.041 11:30:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:28.041 11:30:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.041 11:30:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:28.041 [2024-11-20 11:30:13.466578] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:28.041 11:30:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.041 11:30:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:28.041 11:30:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.041 11:30:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:28.041 Malloc0 00:17:28.041 11:30:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.041 11:30:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s 
SPDK00000000000001 00:17:28.041 11:30:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.041 11:30:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:28.041 11:30:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.041 11:30:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:28.041 11:30:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.041 11:30:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:28.041 11:30:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.041 11:30:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:17:28.041 11:30:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.041 11:30:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:28.041 [2024-11-20 11:30:13.515662] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:17:28.041 11:30:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.041 11:30:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:17:28.041 11:30:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:17:28.041 11:30:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # config=() 00:17:28.041 11:30:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # local subsystem config 00:17:28.041 11:30:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:17:28.041 11:30:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:17:28.041 { 00:17:28.041 "params": { 00:17:28.041 "name": "Nvme$subsystem", 00:17:28.041 "trtype": "$TEST_TRANSPORT", 00:17:28.041 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:28.041 "adrfam": "ipv4", 00:17:28.041 "trsvcid": "$NVMF_PORT", 00:17:28.041 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:28.041 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:28.041 "hdgst": ${hdgst:-false}, 00:17:28.041 "ddgst": ${ddgst:-false} 00:17:28.041 }, 00:17:28.041 "method": "bdev_nvme_attach_controller" 00:17:28.041 } 00:17:28.041 EOF 00:17:28.041 )") 00:17:28.041 11:30:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # cat 00:17:28.041 11:30:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # jq . 
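Editor's note on the bring-up above: once nvmf_tgt is running inside the namespace (launched with --no-huge and a 1024 MiB memory cap on core mask 0x78, then waitforlisten on /var/tmp/spdk.sock), target/bdevio.sh provisions one malloc-backed namespace behind a TCP listener and hands the bdevio app a host configuration produced by gen_nvmf_target_json over /dev/fd/62. A sketch of the same sequence as plain commands, with every flag taken from the log and the target assumed to answer on its default RPC socket:

  ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt \
      -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 &               # target app, no hugepages

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $RPC nvmf_create_transport -t tcp -o -u 8192                 # TCP transport, harness's option flags
  $RPC bdev_malloc_create 64 512 -b Malloc0                    # 64 MiB bdev, 512-byte blocks
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420

  # bdevio then attaches over the generated JSON (the bdev_nvme_attach_controller
  # entry printed just below), also with --no-huge -s 1024:
  #   .../test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024

The 23 tests and 152 assertions in the run summary further down all execute against that single Nvme1n1 bdev (131072 blocks of 512 bytes).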
00:17:28.041 11:30:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@585 -- # IFS=, 00:17:28.041 11:30:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:17:28.041 "params": { 00:17:28.041 "name": "Nvme1", 00:17:28.041 "trtype": "tcp", 00:17:28.041 "traddr": "10.0.0.3", 00:17:28.041 "adrfam": "ipv4", 00:17:28.041 "trsvcid": "4420", 00:17:28.041 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:28.041 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:28.041 "hdgst": false, 00:17:28.041 "ddgst": false 00:17:28.041 }, 00:17:28.041 "method": "bdev_nvme_attach_controller" 00:17:28.041 }' 00:17:28.041 [2024-11-20 11:30:13.572676] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 00:17:28.041 [2024-11-20 11:30:13.572763] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid82203 ] 00:17:28.300 [2024-11-20 11:30:13.728836] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:28.300 [2024-11-20 11:30:13.800438] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:28.300 [2024-11-20 11:30:13.800559] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:28.300 [2024-11-20 11:30:13.800565] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:28.558 I/O targets: 00:17:28.558 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:17:28.558 00:17:28.558 00:17:28.558 CUnit - A unit testing framework for C - Version 2.1-3 00:17:28.558 http://cunit.sourceforge.net/ 00:17:28.558 00:17:28.558 00:17:28.558 Suite: bdevio tests on: Nvme1n1 00:17:28.558 Test: blockdev write read block ...passed 00:17:28.558 Test: blockdev write zeroes read block ...passed 00:17:28.558 Test: blockdev write zeroes read no split ...passed 00:17:28.559 Test: blockdev write zeroes read split ...passed 00:17:28.817 Test: blockdev write zeroes read split partial ...passed 00:17:28.817 Test: blockdev reset ...[2024-11-20 11:30:14.150117] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:17:28.817 [2024-11-20 11:30:14.150251] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23f1380 (9): Bad file descriptor 00:17:28.817 [2024-11-20 11:30:14.162246] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:17:28.817 passed 00:17:28.817 Test: blockdev write read 8 blocks ...passed 00:17:28.817 Test: blockdev write read size > 128k ...passed 00:17:28.817 Test: blockdev write read invalid size ...passed 00:17:28.817 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:17:28.817 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:17:28.817 Test: blockdev write read max offset ...passed 00:17:28.817 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:17:28.817 Test: blockdev writev readv 8 blocks ...passed 00:17:28.817 Test: blockdev writev readv 30 x 1block ...passed 00:17:28.817 Test: blockdev writev readv block ...passed 00:17:28.817 Test: blockdev writev readv size > 128k ...passed 00:17:28.817 Test: blockdev writev readv size > 128k in two iovs ...passed 00:17:28.817 Test: blockdev comparev and writev ...[2024-11-20 11:30:14.338065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:28.817 [2024-11-20 11:30:14.338119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:28.817 [2024-11-20 11:30:14.338141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:28.817 [2024-11-20 11:30:14.338151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:28.817 [2024-11-20 11:30:14.338577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:28.817 [2024-11-20 11:30:14.338626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:17:28.817 [2024-11-20 11:30:14.338647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:28.817 [2024-11-20 11:30:14.338658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:17:28.817 [2024-11-20 11:30:14.338976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:28.817 [2024-11-20 11:30:14.339005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:17:28.817 [2024-11-20 11:30:14.339024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:28.817 [2024-11-20 11:30:14.339034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:17:28.817 [2024-11-20 11:30:14.339310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:28.817 [2024-11-20 11:30:14.339338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:28.817 [2024-11-20 11:30:14.339357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:28.817 [2024-11-20 11:30:14.339367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:17:28.817 passed 00:17:29.110 Test: blockdev nvme passthru rw ...passed 00:17:29.110 Test: blockdev nvme passthru vendor specific ...[2024-11-20 11:30:14.422319] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:29.110 [2024-11-20 11:30:14.422373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:17:29.110 [2024-11-20 11:30:14.422593] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:29.110 [2024-11-20 11:30:14.422631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:17:29.110 [2024-11-20 11:30:14.422831] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:29.110 [2024-11-20 11:30:14.422859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:17:29.110 [2024-11-20 11:30:14.423032] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:29.110 [2024-11-20 11:30:14.423059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:17:29.110 passed 00:17:29.110 Test: blockdev nvme admin passthru ...passed 00:17:29.110 Test: blockdev copy ...passed 00:17:29.110 00:17:29.110 Run Summary: Type Total Ran Passed Failed Inactive 00:17:29.110 suites 1 1 n/a 0 0 00:17:29.110 tests 23 23 23 0 0 00:17:29.110 asserts 152 152 152 0 n/a 00:17:29.110 00:17:29.110 Elapsed time = 0.930 seconds 00:17:29.368 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:29.368 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.368 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:29.368 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.368 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:17:29.368 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:17:29.368 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:29.368 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:17:29.368 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:29.368 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:17:29.368 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:29.368 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:29.368 rmmod nvme_tcp 00:17:29.368 rmmod nvme_fabrics 00:17:29.627 rmmod nvme_keyring 00:17:29.627 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:29.627 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@128 -- # set -e 00:17:29.627 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:17:29.627 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@517 -- # '[' -n 82157 ']' 00:17:29.627 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # killprocess 82157 00:17:29.627 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # '[' -z 82157 ']' 00:17:29.627 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # kill -0 82157 00:17:29.627 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # uname 00:17:29.627 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:29.627 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82157 00:17:29.627 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:17:29.627 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:17:29.627 killing process with pid 82157 00:17:29.627 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82157' 00:17:29.627 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@973 -- # kill 82157 00:17:29.627 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@978 -- # wait 82157 00:17:29.886 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:29.886 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:29.886 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:29.886 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:17:29.886 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-save 00:17:29.886 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-restore 00:17:29.886 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:29.886 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:29.886 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:17:29.886 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:17:29.886 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:17:29.886 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:17:29.886 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:17:30.145 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:17:30.145 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:17:30.145 11:30:15 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:17:30.145 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:17:30.145 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:17:30.145 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:17:30.145 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:17:30.145 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:30.145 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:30.145 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@246 -- # remove_spdk_ns 00:17:30.145 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:30.145 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:30.145 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:30.145 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@300 -- # return 0 00:17:30.145 00:17:30.145 real 0m3.231s 00:17:30.145 user 0m10.436s 00:17:30.145 sys 0m1.336s 00:17:30.145 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:30.145 ************************************ 00:17:30.145 END TEST nvmf_bdevio_no_huge 00:17:30.145 ************************************ 00:17:30.145 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:30.145 11:30:15 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:17:30.145 11:30:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:30.145 11:30:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:30.145 11:30:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:30.145 ************************************ 00:17:30.145 START TEST nvmf_tls 00:17:30.145 ************************************ 00:17:30.145 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:17:30.405 * Looking for test storage... 
00:17:30.405 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:17:30.405 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:30.405 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # lcov --version 00:17:30.405 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:30.405 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:30.405 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:30.405 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:30.405 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:30.405 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:17:30.405 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:17:30.405 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:17:30.405 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:17:30.405 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:17:30.405 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:17:30.405 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:17:30.405 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:30.405 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:17:30.405 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:17:30.405 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:30.405 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:30.405 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:17:30.405 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:17:30.405 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:30.405 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:17:30.405 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:17:30.405 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:17:30.405 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:17:30.405 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:30.405 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:17:30.405 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:17:30.405 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:30.405 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:30.405 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:17:30.405 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:30.405 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:30.405 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:30.405 --rc genhtml_branch_coverage=1 00:17:30.405 --rc genhtml_function_coverage=1 00:17:30.405 --rc genhtml_legend=1 00:17:30.405 --rc geninfo_all_blocks=1 00:17:30.405 --rc geninfo_unexecuted_blocks=1 00:17:30.405 00:17:30.405 ' 00:17:30.405 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:30.405 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:30.405 --rc genhtml_branch_coverage=1 00:17:30.405 --rc genhtml_function_coverage=1 00:17:30.405 --rc genhtml_legend=1 00:17:30.405 --rc geninfo_all_blocks=1 00:17:30.405 --rc geninfo_unexecuted_blocks=1 00:17:30.405 00:17:30.405 ' 00:17:30.405 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:30.405 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:30.405 --rc genhtml_branch_coverage=1 00:17:30.405 --rc genhtml_function_coverage=1 00:17:30.405 --rc genhtml_legend=1 00:17:30.405 --rc geninfo_all_blocks=1 00:17:30.405 --rc geninfo_unexecuted_blocks=1 00:17:30.405 00:17:30.405 ' 00:17:30.405 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:30.405 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:30.405 --rc genhtml_branch_coverage=1 00:17:30.405 --rc genhtml_function_coverage=1 00:17:30.405 --rc genhtml_legend=1 00:17:30.405 --rc geninfo_all_blocks=1 00:17:30.405 --rc geninfo_unexecuted_blocks=1 00:17:30.405 00:17:30.405 ' 00:17:30.406 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:30.406 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:17:30.406 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:30.406 11:30:15 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:30.406 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:30.406 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:30.406 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:30.406 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:30.406 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:30.406 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:30.406 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:30.406 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:30.406 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a 00:17:30.406 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=806d442c-08ba-4719-b76d-33169283459a 00:17:30.406 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:30.406 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:30.406 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:30.406 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:30.406 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:30.406 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 00:17:30.406 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:30.406 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:30.406 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:30.406 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:30.406 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:30.406 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:30.406 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:17:30.406 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:30.406 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:17:30.406 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:30.406 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:30.406 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:30.406 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:30.406 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:30.406 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:30.406 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:30.406 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:30.406 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:30.406 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:30.406 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:30.406 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:17:30.406 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:30.406 
11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:30.406 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:30.406 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:30.406 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:30.406 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:30.406 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:30.406 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:30.406 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:17:30.406 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:17:30.406 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:17:30.406 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:17:30.406 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:17:30.406 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@460 -- # nvmf_veth_init 00:17:30.406 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:30.406 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:17:30.406 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:17:30.406 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:17:30.406 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:30.406 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:17:30.406 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:30.406 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:17:30.406 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:30.406 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:17:30.406 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:30.406 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:30.406 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:30.406 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:30.406 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:30.406 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:30.406 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:17:30.406 Cannot find device "nvmf_init_br" 00:17:30.406 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@162 -- # true 00:17:30.406 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:17:30.406 Cannot find device "nvmf_init_br2" 00:17:30.406 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@163 -- # true 00:17:30.406 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:17:30.406 Cannot find device "nvmf_tgt_br" 00:17:30.406 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@164 -- # true 00:17:30.406 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:17:30.406 Cannot find device "nvmf_tgt_br2" 00:17:30.406 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@165 -- # true 00:17:30.406 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:17:30.406 Cannot find device "nvmf_init_br" 00:17:30.406 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@166 -- # true 00:17:30.406 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:17:30.665 Cannot find device "nvmf_init_br2" 00:17:30.665 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@167 -- # true 00:17:30.665 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:17:30.665 Cannot find device "nvmf_tgt_br" 00:17:30.665 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@168 -- # true 00:17:30.665 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:17:30.665 Cannot find device "nvmf_tgt_br2" 00:17:30.665 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@169 -- # true 00:17:30.665 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:17:30.665 Cannot find device "nvmf_br" 00:17:30.665 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@170 -- # true 00:17:30.665 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:17:30.665 Cannot find device "nvmf_init_if" 00:17:30.665 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@171 -- # true 00:17:30.665 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:17:30.665 Cannot find device "nvmf_init_if2" 00:17:30.665 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@172 -- # true 00:17:30.665 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:30.665 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:30.665 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@173 -- # true 00:17:30.665 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:30.665 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:30.665 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@174 -- # true 00:17:30.665 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:17:30.665 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:30.665 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@181 -- # ip link 
add nvmf_init_if2 type veth peer name nvmf_init_br2 00:17:30.665 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:30.665 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:30.665 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:30.665 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:30.665 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:30.665 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:17:30.665 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:17:30.666 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:17:30.666 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:17:30.666 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:17:30.666 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:17:30.666 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:17:30.666 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:17:30.666 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:17:30.666 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:30.666 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:30.666 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:30.666 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:17:30.666 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:17:30.666 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:17:30.666 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:17:30.925 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:30.925 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:30.925 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:30.925 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:17:30.925 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:17:30.925 11:30:16 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:17:30.925 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:30.925 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:17:30.925 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:17:30.925 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:30.925 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.076 ms 00:17:30.925 00:17:30.925 --- 10.0.0.3 ping statistics --- 00:17:30.925 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:30.925 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:17:30.925 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:17:30.925 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:17:30.925 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.051 ms 00:17:30.925 00:17:30.925 --- 10.0.0.4 ping statistics --- 00:17:30.925 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:30.925 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:17:30.925 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:30.925 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:30.925 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.038 ms 00:17:30.925 00:17:30.925 --- 10.0.0.1 ping statistics --- 00:17:30.925 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:30.925 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:17:30.926 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:17:30.926 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:17:30.926 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.085 ms 00:17:30.926 00:17:30.926 --- 10.0.0.2 ping statistics --- 00:17:30.926 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:30.926 rtt min/avg/max/mdev = 0.085/0.085/0.085/0.000 ms 00:17:30.926 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:30.926 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@461 -- # return 0 00:17:30.926 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:30.926 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:30.926 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:30.926 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:30.926 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:30.926 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:30.926 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:30.926 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:17:30.926 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:30.926 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:30.926 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:30.926 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=82447 00:17:30.926 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 82447 00:17:30.926 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:17:30.926 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 82447 ']' 00:17:30.926 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:30.926 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:30.926 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:30.926 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:30.926 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:30.926 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:30.926 [2024-11-20 11:30:16.413118] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 
00:17:30.926 [2024-11-20 11:30:16.413213] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:31.184 [2024-11-20 11:30:16.560216] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:31.184 [2024-11-20 11:30:16.608042] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:31.184 [2024-11-20 11:30:16.608114] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:31.184 [2024-11-20 11:30:16.608134] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:31.184 [2024-11-20 11:30:16.608148] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:31.184 [2024-11-20 11:30:16.608160] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:31.184 [2024-11-20 11:30:16.608567] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:31.184 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:31.184 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:17:31.184 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:31.184 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:31.184 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:31.184 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:31.184 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:17:31.185 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:17:31.443 true 00:17:31.443 11:30:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:31.443 11:30:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:17:32.008 11:30:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:17:32.008 11:30:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:17:32.008 11:30:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:17:32.267 11:30:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:32.267 11:30:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:17:32.526 11:30:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:17:32.526 11:30:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:17:32.526 11:30:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:17:32.785 11:30:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i 
ssl 00:17:32.785 11:30:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:17:33.353 11:30:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:17:33.353 11:30:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:17:33.353 11:30:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:33.353 11:30:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:17:33.611 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:17:33.611 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:17:33.611 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:17:33.869 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:33.869 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:17:34.436 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:17:34.436 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:17:34.436 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:17:34.694 11:30:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:34.694 11:30:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:17:35.261 11:30:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:17:35.261 11:30:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:17:35.261 11:30:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:17:35.261 11:30:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:17:35.261 11:30:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:17:35.261 11:30:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:17:35.261 11:30:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:17:35.261 11:30:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:17:35.261 11:30:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:17:35.261 11:30:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:17:35.261 11:30:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:17:35.261 11:30:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:17:35.261 11:30:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:17:35.261 11:30:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:17:35.261 11:30:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@732 -- # key=ffeeddccbbaa99887766554433221100 00:17:35.261 11:30:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:17:35.261 11:30:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:17:35.261 11:30:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:17:35.261 11:30:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:17:35.261 11:30:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.7HhWLWcIWE 00:17:35.261 11:30:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:17:35.261 11:30:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.Lq7nCxeCxx 00:17:35.261 11:30:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:17:35.261 11:30:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:17:35.261 11:30:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.7HhWLWcIWE 00:17:35.261 11:30:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@129 -- # chmod 0600 /tmp/tmp.Lq7nCxeCxx 00:17:35.261 11:30:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:17:35.519 11:30:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:17:35.777 11:30:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.7HhWLWcIWE 00:17:35.777 11:30:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.7HhWLWcIWE 00:17:35.777 11:30:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:36.034 [2024-11-20 11:30:21.612224] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:36.292 11:30:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:36.549 11:30:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:17:36.807 [2024-11-20 11:30:22.280472] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:36.807 [2024-11-20 11:30:22.280818] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:17:36.807 11:30:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:17:37.064 malloc0 00:17:37.064 11:30:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:37.628 11:30:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.7HhWLWcIWE 00:17:37.885 11:30:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host 
nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:17:38.143 11:30:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.7HhWLWcIWE 00:17:50.341 Initializing NVMe Controllers 00:17:50.341 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:17:50.341 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:17:50.341 Initialization complete. Launching workers. 00:17:50.341 ======================================================== 00:17:50.341 Latency(us) 00:17:50.341 Device Information : IOPS MiB/s Average min max 00:17:50.341 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 9012.75 35.21 7102.84 1138.17 11813.16 00:17:50.341 ======================================================== 00:17:50.341 Total : 9012.75 35.21 7102.84 1138.17 11813.16 00:17:50.341 00:17:50.341 11:30:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.7HhWLWcIWE 00:17:50.341 11:30:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:50.341 11:30:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:50.341 11:30:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:50.341 11:30:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.7HhWLWcIWE 00:17:50.341 11:30:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:50.341 11:30:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=82817 00:17:50.341 11:30:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:50.341 11:30:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 82817 /var/tmp/bdevperf.sock 00:17:50.341 11:30:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 82817 ']' 00:17:50.341 11:30:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:50.341 11:30:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:50.341 11:30:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:50.341 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:50.341 11:30:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:50.341 11:30:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:50.341 11:30:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:50.341 [2024-11-20 11:30:33.824397] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 
00:17:50.341 [2024-11-20 11:30:33.824496] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82817 ] 00:17:50.341 [2024-11-20 11:30:33.969865] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:50.341 [2024-11-20 11:30:34.010776] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:50.341 11:30:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:50.341 11:30:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:17:50.341 11:30:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.7HhWLWcIWE 00:17:50.341 11:30:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:17:50.341 [2024-11-20 11:30:34.699899] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:50.341 TLSTESTn1 00:17:50.341 11:30:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:17:50.341 Running I/O for 10 seconds... 00:17:51.718 3446.00 IOPS, 13.46 MiB/s [2024-11-20T11:30:38.242Z] 3627.00 IOPS, 14.17 MiB/s [2024-11-20T11:30:39.177Z] 3447.00 IOPS, 13.46 MiB/s [2024-11-20T11:30:40.113Z] 3483.50 IOPS, 13.61 MiB/s [2024-11-20T11:30:41.048Z] 3465.80 IOPS, 13.54 MiB/s [2024-11-20T11:30:41.983Z] 3448.33 IOPS, 13.47 MiB/s [2024-11-20T11:30:43.357Z] 3440.86 IOPS, 13.44 MiB/s [2024-11-20T11:30:44.291Z] 3426.12 IOPS, 13.38 MiB/s [2024-11-20T11:30:44.931Z] 3451.44 IOPS, 13.48 MiB/s [2024-11-20T11:30:45.191Z] 3496.10 IOPS, 13.66 MiB/s 00:17:59.602 Latency(us) 00:17:59.602 [2024-11-20T11:30:45.191Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:59.602 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:17:59.602 Verification LBA range: start 0x0 length 0x2000 00:17:59.602 TLSTESTn1 : 10.02 3503.22 13.68 0.00 0.00 36472.97 5421.61 43611.23 00:17:59.602 [2024-11-20T11:30:45.191Z] =================================================================================================================== 00:17:59.602 [2024-11-20T11:30:45.191Z] Total : 3503.22 13.68 0.00 0.00 36472.97 5421.61 43611.23 00:17:59.602 { 00:17:59.602 "results": [ 00:17:59.602 { 00:17:59.602 "job": "TLSTESTn1", 00:17:59.602 "core_mask": "0x4", 00:17:59.602 "workload": "verify", 00:17:59.602 "status": "finished", 00:17:59.602 "verify_range": { 00:17:59.602 "start": 0, 00:17:59.602 "length": 8192 00:17:59.602 }, 00:17:59.602 "queue_depth": 128, 00:17:59.602 "io_size": 4096, 00:17:59.602 "runtime": 10.016203, 00:17:59.602 "iops": 3503.223726595797, 00:17:59.602 "mibps": 13.684467682014832, 00:17:59.602 "io_failed": 0, 00:17:59.602 "io_timeout": 0, 00:17:59.602 "avg_latency_us": 36472.97286867938, 00:17:59.602 "min_latency_us": 5421.614545454546, 00:17:59.602 "max_latency_us": 43611.22909090909 00:17:59.602 } 00:17:59.602 ], 00:17:59.602 "core_count": 1 00:17:59.602 } 00:17:59.602 11:30:44 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:59.602 11:30:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 82817 00:17:59.602 11:30:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 82817 ']' 00:17:59.602 11:30:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 82817 00:17:59.602 11:30:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:17:59.602 11:30:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:59.602 11:30:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82817 00:17:59.602 11:30:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:17:59.602 11:30:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:17:59.602 killing process with pid 82817 00:17:59.602 11:30:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82817' 00:17:59.602 11:30:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 82817 00:17:59.602 11:30:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 82817 00:17:59.602 Received shutdown signal, test time was about 10.000000 seconds 00:17:59.602 00:17:59.602 Latency(us) 00:17:59.602 [2024-11-20T11:30:45.191Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:59.602 [2024-11-20T11:30:45.191Z] =================================================================================================================== 00:17:59.602 [2024-11-20T11:30:45.191Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:59.602 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Lq7nCxeCxx 00:17:59.602 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:17:59.602 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Lq7nCxeCxx 00:17:59.602 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:17:59.602 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:59.602 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:17:59.602 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:59.602 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Lq7nCxeCxx 00:17:59.602 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:59.602 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:59.602 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:59.602 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.Lq7nCxeCxx 00:17:59.602 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:59.602 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=82957 00:17:59.602 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:59.602 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 82957 /var/tmp/bdevperf.sock 00:17:59.602 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:59.602 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 82957 ']' 00:17:59.602 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:59.602 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:59.602 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:59.602 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:59.602 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:59.602 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:59.861 [2024-11-20 11:30:45.202253] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 00:17:59.861 [2024-11-20 11:30:45.202381] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82957 ] 00:17:59.861 [2024-11-20 11:30:45.394343] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:59.861 [2024-11-20 11:30:45.445476] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:00.795 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:00.795 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:00.795 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Lq7nCxeCxx 00:18:01.053 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:01.312 [2024-11-20 11:30:46.871867] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:01.312 [2024-11-20 11:30:46.879871] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:18:01.312 [2024-11-20 11:30:46.880690] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ffac0 (107): Transport endpoint is not connected 00:18:01.312 [2024-11-20 11:30:46.881676] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ffac0 (9): Bad file descriptor 00:18:01.312 [2024-11-20 
11:30:46.882672] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:18:01.312 [2024-11-20 11:30:46.882703] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 00:18:01.312 [2024-11-20 11:30:46.882715] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:18:01.312 [2024-11-20 11:30:46.882733] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 00:18:01.312 2024/11/20 11:30:46 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:key0 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:18:01.312 request: 00:18:01.312 { 00:18:01.312 "method": "bdev_nvme_attach_controller", 00:18:01.312 "params": { 00:18:01.312 "name": "TLSTEST", 00:18:01.312 "trtype": "tcp", 00:18:01.312 "traddr": "10.0.0.3", 00:18:01.312 "adrfam": "ipv4", 00:18:01.312 "trsvcid": "4420", 00:18:01.312 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:01.312 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:01.312 "prchk_reftag": false, 00:18:01.312 "prchk_guard": false, 00:18:01.312 "hdgst": false, 00:18:01.312 "ddgst": false, 00:18:01.312 "psk": "key0", 00:18:01.312 "allow_unrecognized_csi": false 00:18:01.312 } 00:18:01.312 } 00:18:01.312 Got JSON-RPC error response 00:18:01.312 GoRPCClient: error on JSON-RPC call 00:18:01.571 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 82957 00:18:01.571 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 82957 ']' 00:18:01.571 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 82957 00:18:01.571 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:01.571 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:01.571 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82957 00:18:01.571 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:01.571 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:01.571 killing process with pid 82957 00:18:01.571 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82957' 00:18:01.571 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 82957 00:18:01.571 Received shutdown signal, test time was about 10.000000 seconds 00:18:01.571 00:18:01.571 Latency(us) 00:18:01.571 [2024-11-20T11:30:47.160Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:01.571 [2024-11-20T11:30:47.160Z] =================================================================================================================== 00:18:01.571 [2024-11-20T11:30:47.160Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:01.571 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@978 -- # wait 82957 00:18:01.571 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:18:01.571 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:18:01.571 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:01.571 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:01.571 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:01.572 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.7HhWLWcIWE 00:18:01.572 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:18:01.572 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.7HhWLWcIWE 00:18:01.572 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:18:01.572 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:01.572 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:18:01.572 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:01.572 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.7HhWLWcIWE 00:18:01.572 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:01.572 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:01.572 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:18:01.572 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.7HhWLWcIWE 00:18:01.572 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:01.572 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=83015 00:18:01.572 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:01.572 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:01.572 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 83015 /var/tmp/bdevperf.sock 00:18:01.572 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 83015 ']' 00:18:01.572 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:01.572 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:01.572 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:01.572 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:18:01.572 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:01.572 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:01.572 [2024-11-20 11:30:47.112864] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 00:18:01.572 [2024-11-20 11:30:47.113000] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83015 ] 00:18:01.831 [2024-11-20 11:30:47.258336] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:01.831 [2024-11-20 11:30:47.292496] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:01.831 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:01.831 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:01.831 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.7HhWLWcIWE 00:18:02.090 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:18:02.348 [2024-11-20 11:30:47.916418] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:02.348 [2024-11-20 11:30:47.921605] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:18:02.348 [2024-11-20 11:30:47.921683] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:18:02.348 [2024-11-20 11:30:47.921760] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:18:02.348 [2024-11-20 11:30:47.922340] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c23ac0 (107): Transport endpoint is not connected 00:18:02.348 [2024-11-20 11:30:47.923322] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c23ac0 (9): Bad file descriptor 00:18:02.348 [2024-11-20 11:30:47.924318] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:18:02.348 [2024-11-20 11:30:47.924351] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 00:18:02.348 [2024-11-20 11:30:47.924366] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:18:02.348 [2024-11-20 11:30:47.924383] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
00:18:02.349 2024/11/20 11:30:47 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host2 name:TLSTEST prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:key0 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:18:02.349 request: 00:18:02.349 { 00:18:02.349 "method": "bdev_nvme_attach_controller", 00:18:02.349 "params": { 00:18:02.349 "name": "TLSTEST", 00:18:02.349 "trtype": "tcp", 00:18:02.349 "traddr": "10.0.0.3", 00:18:02.349 "adrfam": "ipv4", 00:18:02.349 "trsvcid": "4420", 00:18:02.349 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:02.349 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:18:02.349 "prchk_reftag": false, 00:18:02.349 "prchk_guard": false, 00:18:02.349 "hdgst": false, 00:18:02.349 "ddgst": false, 00:18:02.349 "psk": "key0", 00:18:02.349 "allow_unrecognized_csi": false 00:18:02.349 } 00:18:02.349 } 00:18:02.349 Got JSON-RPC error response 00:18:02.349 GoRPCClient: error on JSON-RPC call 00:18:02.609 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 83015 00:18:02.609 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 83015 ']' 00:18:02.609 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 83015 00:18:02.609 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:02.609 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:02.609 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83015 00:18:02.609 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:02.609 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:02.609 killing process with pid 83015 00:18:02.609 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83015' 00:18:02.609 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 83015 00:18:02.609 Received shutdown signal, test time was about 10.000000 seconds 00:18:02.609 00:18:02.609 Latency(us) 00:18:02.609 [2024-11-20T11:30:48.198Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:02.609 [2024-11-20T11:30:48.198Z] =================================================================================================================== 00:18:02.609 [2024-11-20T11:30:48.198Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:02.609 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 83015 00:18:02.609 11:30:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:18:02.609 11:30:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:18:02.609 11:30:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:02.609 11:30:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:02.609 11:30:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:02.609 11:30:48 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.7HhWLWcIWE 00:18:02.609 11:30:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:18:02.609 11:30:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.7HhWLWcIWE 00:18:02.609 11:30:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:18:02.609 11:30:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:02.609 11:30:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:18:02.609 11:30:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:02.609 11:30:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.7HhWLWcIWE 00:18:02.609 11:30:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:02.609 11:30:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:18:02.609 11:30:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:02.609 11:30:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.7HhWLWcIWE 00:18:02.609 11:30:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:02.609 11:30:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=83054 00:18:02.609 11:30:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:02.609 11:30:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:02.609 11:30:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 83054 /var/tmp/bdevperf.sock 00:18:02.609 11:30:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 83054 ']' 00:18:02.609 11:30:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:02.609 11:30:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:02.609 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:02.609 11:30:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:02.609 11:30:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:02.609 11:30:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:02.609 [2024-11-20 11:30:48.185431] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 
00:18:02.609 [2024-11-20 11:30:48.185566] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83054 ] 00:18:02.869 [2024-11-20 11:30:48.337281] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:02.869 [2024-11-20 11:30:48.371077] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:02.869 11:30:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:02.869 11:30:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:02.869 11:30:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.7HhWLWcIWE 00:18:03.436 11:30:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:03.436 [2024-11-20 11:30:49.012640] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:03.436 [2024-11-20 11:30:49.019637] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:18:03.436 [2024-11-20 11:30:49.019687] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:18:03.436 [2024-11-20 11:30:49.019743] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:18:03.436 [2024-11-20 11:30:49.020456] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a2bac0 (107): Transport endpoint is not connected 00:18:03.436 [2024-11-20 11:30:49.021448] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a2bac0 (9): Bad file descriptor 00:18:03.436 [2024-11-20 11:30:49.022444] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] Ctrlr is in error state 00:18:03.436 [2024-11-20 11:30:49.022480] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 00:18:03.436 [2024-11-20 11:30:49.022495] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:18:03.436 [2024-11-20 11:30:49.022513] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] in failed state. 
00:18:03.707 2024/11/20 11:30:49 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:key0 subnqn:nqn.2016-06.io.spdk:cnode2 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:18:03.707 request: 00:18:03.707 { 00:18:03.707 "method": "bdev_nvme_attach_controller", 00:18:03.707 "params": { 00:18:03.707 "name": "TLSTEST", 00:18:03.707 "trtype": "tcp", 00:18:03.707 "traddr": "10.0.0.3", 00:18:03.707 "adrfam": "ipv4", 00:18:03.707 "trsvcid": "4420", 00:18:03.707 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:18:03.707 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:03.707 "prchk_reftag": false, 00:18:03.707 "prchk_guard": false, 00:18:03.707 "hdgst": false, 00:18:03.707 "ddgst": false, 00:18:03.707 "psk": "key0", 00:18:03.707 "allow_unrecognized_csi": false 00:18:03.707 } 00:18:03.707 } 00:18:03.707 Got JSON-RPC error response 00:18:03.707 GoRPCClient: error on JSON-RPC call 00:18:03.707 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 83054 00:18:03.707 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 83054 ']' 00:18:03.707 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 83054 00:18:03.707 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:03.707 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:03.707 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83054 00:18:03.707 killing process with pid 83054 00:18:03.707 Received shutdown signal, test time was about 10.000000 seconds 00:18:03.708 00:18:03.708 Latency(us) 00:18:03.708 [2024-11-20T11:30:49.297Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:03.708 [2024-11-20T11:30:49.297Z] =================================================================================================================== 00:18:03.708 [2024-11-20T11:30:49.297Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:03.708 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:03.708 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:03.708 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83054' 00:18:03.708 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 83054 00:18:03.708 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 83054 00:18:03.708 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:18:03.708 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:18:03.708 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:03.708 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:03.708 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:03.708 11:30:49 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:18:03.708 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:18:03.708 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:18:03.708 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:18:03.709 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:03.709 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:18:03.709 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:03.709 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:18:03.709 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:03.709 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:03.709 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:03.709 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:18:03.709 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:03.709 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=83093 00:18:03.709 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:03.709 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:03.709 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 83093 /var/tmp/bdevperf.sock 00:18:03.709 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 83093 ']' 00:18:03.709 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:03.709 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:03.709 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:03.709 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:03.709 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:03.709 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:03.709 [2024-11-20 11:30:49.270110] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 
00:18:03.710 [2024-11-20 11:30:49.270205] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83093 ] 00:18:03.976 [2024-11-20 11:30:49.414248] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:03.976 [2024-11-20 11:30:49.447717] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:03.976 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:03.976 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:03.976 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:18:04.234 [2024-11-20 11:30:49.809209] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:18:04.234 [2024-11-20 11:30:49.809258] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:18:04.234 2024/11/20 11:30:49 error on JSON-RPC call, method: keyring_file_add_key, params: map[name:key0 path:], err: error received for keyring_file_add_key method, err: Code=-1 Msg=Operation not permitted 00:18:04.234 request: 00:18:04.234 { 00:18:04.234 "method": "keyring_file_add_key", 00:18:04.234 "params": { 00:18:04.234 "name": "key0", 00:18:04.234 "path": "" 00:18:04.234 } 00:18:04.234 } 00:18:04.234 Got JSON-RPC error response 00:18:04.234 GoRPCClient: error on JSON-RPC call 00:18:04.492 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:04.751 [2024-11-20 11:30:50.133379] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:04.751 [2024-11-20 11:30:50.133847] bdev_nvme.c:6717:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:18:04.751 2024/11/20 11:30:50 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:key0 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-126 Msg=Required key not available 00:18:04.751 request: 00:18:04.751 { 00:18:04.751 "method": "bdev_nvme_attach_controller", 00:18:04.751 "params": { 00:18:04.751 "name": "TLSTEST", 00:18:04.751 "trtype": "tcp", 00:18:04.751 "traddr": "10.0.0.3", 00:18:04.751 "adrfam": "ipv4", 00:18:04.751 "trsvcid": "4420", 00:18:04.751 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:04.751 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:04.751 "prchk_reftag": false, 00:18:04.751 "prchk_guard": false, 00:18:04.751 "hdgst": false, 00:18:04.751 "ddgst": false, 00:18:04.751 "psk": "key0", 00:18:04.751 "allow_unrecognized_csi": false 00:18:04.751 } 00:18:04.751 } 00:18:04.751 Got JSON-RPC error response 00:18:04.751 GoRPCClient: error on JSON-RPC call 00:18:04.751 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 83093 00:18:04.751 11:30:50 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 83093 ']' 00:18:04.751 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 83093 00:18:04.751 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:04.751 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:04.751 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83093 00:18:04.751 killing process with pid 83093 00:18:04.751 Received shutdown signal, test time was about 10.000000 seconds 00:18:04.751 00:18:04.751 Latency(us) 00:18:04.751 [2024-11-20T11:30:50.340Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:04.751 [2024-11-20T11:30:50.340Z] =================================================================================================================== 00:18:04.751 [2024-11-20T11:30:50.340Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:04.751 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:04.751 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:04.751 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83093' 00:18:04.751 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 83093 00:18:04.751 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 83093 00:18:04.751 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:18:04.751 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:18:04.751 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:04.751 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:04.751 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:04.751 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 82447 00:18:04.751 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 82447 ']' 00:18:04.751 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 82447 00:18:04.751 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:04.751 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:04.751 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82447 00:18:05.009 killing process with pid 82447 00:18:05.009 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:05.009 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:05.010 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82447' 00:18:05.010 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 82447 00:18:05.010 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 82447 00:18:05.010 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:18:05.010 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:18:05.010 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:18:05.010 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:18:05.010 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:18:05.010 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=2 00:18:05.010 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:18:05.010 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:18:05.010 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:18:05.010 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.KsWL0pcwzf 00:18:05.010 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:18:05.010 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.KsWL0pcwzf 00:18:05.010 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:18:05.010 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:05.010 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:05.010 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:05.010 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=83142 00:18:05.010 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:05.010 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 83142 00:18:05.010 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 83142 ']' 00:18:05.010 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:05.010 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:05.010 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:05.010 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:05.010 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:05.010 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:05.268 [2024-11-20 11:30:50.622694] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 
00:18:05.268 [2024-11-20 11:30:50.622826] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:05.268 [2024-11-20 11:30:50.776022] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:05.268 [2024-11-20 11:30:50.809064] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:05.268 [2024-11-20 11:30:50.809118] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:05.268 [2024-11-20 11:30:50.809131] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:05.268 [2024-11-20 11:30:50.809139] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:05.268 [2024-11-20 11:30:50.809146] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:05.268 [2024-11-20 11:30:50.809447] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:06.204 11:30:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:06.204 11:30:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:06.204 11:30:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:06.204 11:30:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:06.204 11:30:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:06.204 11:30:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:06.204 11:30:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.KsWL0pcwzf 00:18:06.204 11:30:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.KsWL0pcwzf 00:18:06.204 11:30:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:06.462 [2024-11-20 11:30:51.886535] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:06.462 11:30:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:06.732 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:18:07.001 [2024-11-20 11:30:52.526701] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:07.001 [2024-11-20 11:30:52.526936] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:18:07.001 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:07.259 malloc0 00:18:07.259 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:07.826 11:30:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
keyring_file_add_key key0 /tmp/tmp.KsWL0pcwzf 00:18:08.084 11:30:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:18:08.342 11:30:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.KsWL0pcwzf 00:18:08.342 11:30:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:08.342 11:30:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:08.342 11:30:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:08.342 11:30:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.KsWL0pcwzf 00:18:08.342 11:30:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:08.342 11:30:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=83257 00:18:08.343 11:30:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:08.343 11:30:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:08.343 11:30:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 83257 /var/tmp/bdevperf.sock 00:18:08.343 11:30:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 83257 ']' 00:18:08.343 11:30:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:08.343 11:30:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:08.343 11:30:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:08.343 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:08.343 11:30:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:08.343 11:30:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:08.343 [2024-11-20 11:30:53.840751] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 
00:18:08.343 [2024-11-20 11:30:53.840856] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83257 ] 00:18:08.600 [2024-11-20 11:30:53.987684] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:08.600 [2024-11-20 11:30:54.022035] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:08.600 11:30:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:08.600 11:30:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:08.600 11:30:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.KsWL0pcwzf 00:18:09.167 11:30:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:09.167 [2024-11-20 11:30:54.711940] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:09.425 TLSTESTn1 00:18:09.425 11:30:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:18:09.425 Running I/O for 10 seconds... 00:18:11.737 3870.00 IOPS, 15.12 MiB/s [2024-11-20T11:30:58.260Z] 3832.00 IOPS, 14.97 MiB/s [2024-11-20T11:30:59.195Z] 3820.33 IOPS, 14.92 MiB/s [2024-11-20T11:31:00.128Z] 3864.00 IOPS, 15.09 MiB/s [2024-11-20T11:31:01.061Z] 3892.60 IOPS, 15.21 MiB/s [2024-11-20T11:31:01.997Z] 3908.83 IOPS, 15.27 MiB/s [2024-11-20T11:31:02.947Z] 3909.57 IOPS, 15.27 MiB/s [2024-11-20T11:31:04.388Z] 3929.88 IOPS, 15.35 MiB/s [2024-11-20T11:31:04.971Z] 3944.22 IOPS, 15.41 MiB/s [2024-11-20T11:31:04.971Z] 3952.80 IOPS, 15.44 MiB/s 00:18:19.382 Latency(us) 00:18:19.382 [2024-11-20T11:31:04.971Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:19.382 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:19.382 Verification LBA range: start 0x0 length 0x2000 00:18:19.382 TLSTESTn1 : 10.02 3959.14 15.47 0.00 0.00 32270.90 5093.93 32887.16 00:18:19.382 [2024-11-20T11:31:04.971Z] =================================================================================================================== 00:18:19.382 [2024-11-20T11:31:04.971Z] Total : 3959.14 15.47 0.00 0.00 32270.90 5093.93 32887.16 00:18:19.382 { 00:18:19.382 "results": [ 00:18:19.382 { 00:18:19.382 "job": "TLSTESTn1", 00:18:19.382 "core_mask": "0x4", 00:18:19.382 "workload": "verify", 00:18:19.382 "status": "finished", 00:18:19.382 "verify_range": { 00:18:19.382 "start": 0, 00:18:19.382 "length": 8192 00:18:19.382 }, 00:18:19.382 "queue_depth": 128, 00:18:19.382 "io_size": 4096, 00:18:19.382 "runtime": 10.015809, 00:18:19.382 "iops": 3959.1409940025815, 00:18:19.382 "mibps": 15.465394507822584, 00:18:19.382 "io_failed": 0, 00:18:19.382 "io_timeout": 0, 00:18:19.382 "avg_latency_us": 32270.90177599875, 00:18:19.382 "min_latency_us": 5093.9345454545455, 00:18:19.382 "max_latency_us": 32887.156363636364 00:18:19.382 } 00:18:19.382 ], 00:18:19.382 "core_count": 1 00:18:19.382 } 00:18:19.382 11:31:04 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:19.382 11:31:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 83257 00:18:19.382 11:31:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 83257 ']' 00:18:19.382 11:31:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 83257 00:18:19.382 11:31:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:19.382 11:31:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:19.382 11:31:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83257 00:18:19.641 11:31:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:19.641 killing process with pid 83257 00:18:19.641 11:31:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:19.641 11:31:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83257' 00:18:19.641 11:31:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 83257 00:18:19.641 11:31:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 83257 00:18:19.641 Received shutdown signal, test time was about 10.000000 seconds 00:18:19.641 00:18:19.641 Latency(us) 00:18:19.641 [2024-11-20T11:31:05.230Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:19.641 [2024-11-20T11:31:05.230Z] =================================================================================================================== 00:18:19.641 [2024-11-20T11:31:05.230Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:19.641 11:31:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.KsWL0pcwzf 00:18:19.642 11:31:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@172 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.KsWL0pcwzf 00:18:19.642 11:31:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:18:19.642 11:31:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.KsWL0pcwzf 00:18:19.642 11:31:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:18:19.642 11:31:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:19.642 11:31:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:18:19.642 11:31:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:19.642 11:31:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.KsWL0pcwzf 00:18:19.642 11:31:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:19.642 11:31:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:19.642 11:31:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:19.642 11:31:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@23 -- # psk=/tmp/tmp.KsWL0pcwzf 00:18:19.642 11:31:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:19.642 11:31:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=83398 00:18:19.642 11:31:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:19.642 11:31:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 83398 /var/tmp/bdevperf.sock 00:18:19.642 11:31:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 83398 ']' 00:18:19.642 11:31:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:19.642 11:31:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:19.642 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:19.642 11:31:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:19.642 11:31:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:19.642 11:31:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:19.642 11:31:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:19.642 [2024-11-20 11:31:05.180747] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 00:18:19.642 [2024-11-20 11:31:05.180831] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83398 ] 00:18:19.900 [2024-11-20 11:31:05.329146] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:19.900 [2024-11-20 11:31:05.369891] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:19.900 11:31:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:19.900 11:31:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:19.900 11:31:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.KsWL0pcwzf 00:18:20.158 [2024-11-20 11:31:05.738185] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.KsWL0pcwzf': 0100666 00:18:20.158 [2024-11-20 11:31:05.738239] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:18:20.158 2024/11/20 11:31:05 error on JSON-RPC call, method: keyring_file_add_key, params: map[name:key0 path:/tmp/tmp.KsWL0pcwzf], err: error received for keyring_file_add_key method, err: Code=-1 Msg=Operation not permitted 00:18:20.158 request: 00:18:20.158 { 00:18:20.158 "method": "keyring_file_add_key", 00:18:20.158 "params": { 00:18:20.158 "name": "key0", 00:18:20.158 "path": "/tmp/tmp.KsWL0pcwzf" 00:18:20.158 } 00:18:20.158 } 00:18:20.158 Got JSON-RPC error response 00:18:20.158 GoRPCClient: error on JSON-RPC call 00:18:20.417 11:31:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:20.675 [2024-11-20 11:31:06.030470] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:20.675 [2024-11-20 11:31:06.030546] bdev_nvme.c:6717:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:18:20.676 2024/11/20 11:31:06 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:key0 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-126 Msg=Required key not available 00:18:20.676 request: 00:18:20.676 { 00:18:20.676 "method": "bdev_nvme_attach_controller", 00:18:20.676 "params": { 00:18:20.676 "name": "TLSTEST", 00:18:20.676 "trtype": "tcp", 00:18:20.676 "traddr": "10.0.0.3", 00:18:20.676 "adrfam": "ipv4", 00:18:20.676 "trsvcid": "4420", 00:18:20.676 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:20.676 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:20.676 "prchk_reftag": false, 00:18:20.676 "prchk_guard": false, 00:18:20.676 "hdgst": false, 00:18:20.676 "ddgst": false, 00:18:20.676 "psk": "key0", 00:18:20.676 "allow_unrecognized_csi": false 00:18:20.676 } 00:18:20.676 } 00:18:20.676 Got JSON-RPC error response 00:18:20.676 GoRPCClient: error on JSON-RPC call 00:18:20.676 11:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 83398 00:18:20.676 11:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 83398 ']' 00:18:20.676 11:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 83398 00:18:20.676 11:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:20.676 11:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:20.676 11:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83398 00:18:20.676 11:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:20.676 11:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:20.676 killing process with pid 83398 00:18:20.676 11:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83398' 00:18:20.676 11:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 83398 00:18:20.676 11:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 83398 00:18:20.676 Received shutdown signal, test time was about 10.000000 seconds 00:18:20.676 00:18:20.676 Latency(us) 00:18:20.676 [2024-11-20T11:31:06.265Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:20.676 [2024-11-20T11:31:06.265Z] =================================================================================================================== 00:18:20.676 [2024-11-20T11:31:06.265Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:20.676 11:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 
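Taken together, the passing run above and this deliberately failing retry show the initiator-side contract for TLS PSKs: with the key file at its original restrictive mode the keyring accepts it and the bdevperf verify pass completes, while after chmod 0666 keyring_file_add_key rejects the file ('Invalid permissions for key file ... 0100666') and the subsequent attach fails with 'Could not load PSK: key0'. A minimal sketch of the passing sequence, reusing the RPC socket, subsystem names and per-run key file path seen in this log (anything else here would be an assumption; the test later restores the file to 0600 before reusing it):

# Keyring refuses group/world-accessible key files; keep the PSK interchange file at 0600.
chmod 0600 /tmp/tmp.KsWL0pcwzf
# Register the PSK interchange file under the name key0 with the bdevperf RPC server.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
    keyring_file_add_key key0 /tmp/tmp.KsWL0pcwzf
# Attach a TLS-protected NVMe/TCP controller that presents key0 as the PSK.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
    bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0
# Run the timed verify workload against the attached controller.
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests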
00:18:20.676 11:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:18:20.676 11:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:20.676 11:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:20.676 11:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:20.676 11:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 83142 00:18:20.676 11:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 83142 ']' 00:18:20.676 11:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 83142 00:18:20.676 11:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:20.676 11:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:20.676 11:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83142 00:18:20.676 11:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:20.676 killing process with pid 83142 00:18:20.676 11:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:20.676 11:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83142' 00:18:20.676 11:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 83142 00:18:20.676 11:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 83142 00:18:20.934 11:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:18:20.934 11:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:20.934 11:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:20.934 11:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:20.934 11:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=83449 00:18:20.934 11:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 83449 00:18:20.934 11:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 83449 ']' 00:18:20.934 11:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:20.935 11:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:20.935 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:20.935 11:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:20.935 11:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:20.935 11:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:20.935 11:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:20.935 [2024-11-20 11:31:06.445170] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 
00:18:20.935 [2024-11-20 11:31:06.445286] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:21.193 [2024-11-20 11:31:06.590599] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:21.193 [2024-11-20 11:31:06.635810] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:21.193 [2024-11-20 11:31:06.635885] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:21.193 [2024-11-20 11:31:06.635904] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:21.193 [2024-11-20 11:31:06.635920] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:21.193 [2024-11-20 11:31:06.635932] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:21.193 [2024-11-20 11:31:06.636335] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:21.193 11:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:21.193 11:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:21.193 11:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:21.193 11:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:21.193 11:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:21.193 11:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:21.193 11:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.KsWL0pcwzf 00:18:21.193 11:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:18:21.193 11:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.KsWL0pcwzf 00:18:21.193 11:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=setup_nvmf_tgt 00:18:21.193 11:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:21.193 11:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t setup_nvmf_tgt 00:18:21.193 11:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:21.193 11:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # setup_nvmf_tgt /tmp/tmp.KsWL0pcwzf 00:18:21.193 11:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.KsWL0pcwzf 00:18:21.193 11:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:21.760 [2024-11-20 11:31:07.071646] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:21.760 11:31:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:22.018 11:31:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:18:22.276 [2024-11-20 11:31:07.723797] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:22.276 [2024-11-20 11:31:07.724023] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:18:22.276 11:31:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:22.534 malloc0 00:18:22.534 11:31:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:23.101 11:31:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.KsWL0pcwzf 00:18:23.101 [2024-11-20 11:31:08.638795] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.KsWL0pcwzf': 0100666 00:18:23.101 [2024-11-20 11:31:08.638848] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:18:23.101 2024/11/20 11:31:08 error on JSON-RPC call, method: keyring_file_add_key, params: map[name:key0 path:/tmp/tmp.KsWL0pcwzf], err: error received for keyring_file_add_key method, err: Code=-1 Msg=Operation not permitted 00:18:23.101 request: 00:18:23.101 { 00:18:23.101 "method": "keyring_file_add_key", 00:18:23.101 "params": { 00:18:23.101 "name": "key0", 00:18:23.101 "path": "/tmp/tmp.KsWL0pcwzf" 00:18:23.101 } 00:18:23.101 } 00:18:23.101 Got JSON-RPC error response 00:18:23.101 GoRPCClient: error on JSON-RPC call 00:18:23.101 11:31:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:18:23.360 [2024-11-20 11:31:08.906870] tcp.c:3792:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:18:23.360 [2024-11-20 11:31:08.906940] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:18:23.360 2024/11/20 11:31:08 error on JSON-RPC call, method: nvmf_subsystem_add_host, params: map[host:nqn.2016-06.io.spdk:host1 nqn:nqn.2016-06.io.spdk:cnode1 psk:key0], err: error received for nvmf_subsystem_add_host method, err: Code=-32603 Msg=Internal error 00:18:23.360 request: 00:18:23.360 { 00:18:23.360 "method": "nvmf_subsystem_add_host", 00:18:23.360 "params": { 00:18:23.360 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:23.360 "host": "nqn.2016-06.io.spdk:host1", 00:18:23.360 "psk": "key0" 00:18:23.360 } 00:18:23.360 } 00:18:23.360 Got JSON-RPC error response 00:18:23.360 GoRPCClient: error on JSON-RPC call 00:18:23.360 11:31:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:18:23.360 11:31:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:23.360 11:31:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:23.360 11:31:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:23.360 11:31:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 83449 00:18:23.360 11:31:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 83449 ']' 00:18:23.360 11:31:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@958 -- # kill -0 83449 00:18:23.360 11:31:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:23.360 11:31:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:23.360 11:31:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83449 00:18:23.619 11:31:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:23.619 11:31:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:23.619 killing process with pid 83449 00:18:23.619 11:31:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83449' 00:18:23.619 11:31:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 83449 00:18:23.619 11:31:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 83449 00:18:23.619 11:31:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.KsWL0pcwzf 00:18:23.619 11:31:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:18:23.619 11:31:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:23.619 11:31:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:23.619 11:31:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:23.619 11:31:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=83558 00:18:23.619 11:31:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:23.619 11:31:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 83558 00:18:23.619 11:31:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 83558 ']' 00:18:23.619 11:31:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:23.619 11:31:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:23.619 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:23.619 11:31:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:23.619 11:31:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:23.619 11:31:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:23.619 [2024-11-20 11:31:09.180184] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 00:18:23.619 [2024-11-20 11:31:09.180319] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:23.877 [2024-11-20 11:31:09.334292] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:23.877 [2024-11-20 11:31:09.371746] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:18:23.877 [2024-11-20 11:31:09.371809] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:23.877 [2024-11-20 11:31:09.371824] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:23.877 [2024-11-20 11:31:09.371834] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:23.877 [2024-11-20 11:31:09.371842] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:23.877 [2024-11-20 11:31:09.372192] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:24.135 11:31:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:24.135 11:31:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:24.135 11:31:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:24.135 11:31:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:24.135 11:31:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:24.135 11:31:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:24.135 11:31:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.KsWL0pcwzf 00:18:24.135 11:31:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.KsWL0pcwzf 00:18:24.135 11:31:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:24.392 [2024-11-20 11:31:09.787776] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:24.392 11:31:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:24.650 11:31:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:18:24.909 [2024-11-20 11:31:10.459950] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:24.909 [2024-11-20 11:31:10.460207] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:18:24.909 11:31:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:25.272 malloc0 00:18:25.272 11:31:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:25.531 11:31:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.KsWL0pcwzf 00:18:25.789 11:31:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:18:26.356 11:31:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=83661 00:18:26.356 11:31:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:26.356 11:31:11 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 83661 /var/tmp/bdevperf.sock 00:18:26.356 11:31:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 83661 ']' 00:18:26.357 11:31:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:26.357 11:31:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:26.357 11:31:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:26.357 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:26.357 11:31:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:26.357 11:31:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:26.357 11:31:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:26.357 [2024-11-20 11:31:11.765587] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 00:18:26.357 [2024-11-20 11:31:11.765724] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83661 ] 00:18:26.357 [2024-11-20 11:31:11.912353] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:26.614 [2024-11-20 11:31:11.946740] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:26.615 11:31:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:26.615 11:31:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:26.615 11:31:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.KsWL0pcwzf 00:18:26.873 11:31:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:27.440 [2024-11-20 11:31:12.742698] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:27.440 TLSTESTn1 00:18:27.440 11:31:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:18:27.699 11:31:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:18:27.699 "subsystems": [ 00:18:27.699 { 00:18:27.699 "subsystem": "keyring", 00:18:27.699 "config": [ 00:18:27.699 { 00:18:27.699 "method": "keyring_file_add_key", 00:18:27.699 "params": { 00:18:27.699 "name": "key0", 00:18:27.699 "path": "/tmp/tmp.KsWL0pcwzf" 00:18:27.699 } 00:18:27.699 } 00:18:27.699 ] 00:18:27.699 }, 00:18:27.699 { 00:18:27.699 "subsystem": "iobuf", 00:18:27.699 "config": [ 00:18:27.699 { 00:18:27.699 "method": "iobuf_set_options", 00:18:27.699 "params": { 00:18:27.699 "enable_numa": false, 00:18:27.699 "large_bufsize": 135168, 00:18:27.699 "large_pool_count": 1024, 00:18:27.699 
"small_bufsize": 8192, 00:18:27.699 "small_pool_count": 8192 00:18:27.699 } 00:18:27.699 } 00:18:27.699 ] 00:18:27.699 }, 00:18:27.699 { 00:18:27.699 "subsystem": "sock", 00:18:27.699 "config": [ 00:18:27.699 { 00:18:27.699 "method": "sock_set_default_impl", 00:18:27.699 "params": { 00:18:27.699 "impl_name": "posix" 00:18:27.699 } 00:18:27.699 }, 00:18:27.699 { 00:18:27.699 "method": "sock_impl_set_options", 00:18:27.699 "params": { 00:18:27.699 "enable_ktls": false, 00:18:27.699 "enable_placement_id": 0, 00:18:27.699 "enable_quickack": false, 00:18:27.699 "enable_recv_pipe": true, 00:18:27.699 "enable_zerocopy_send_client": false, 00:18:27.699 "enable_zerocopy_send_server": true, 00:18:27.699 "impl_name": "ssl", 00:18:27.699 "recv_buf_size": 4096, 00:18:27.699 "send_buf_size": 4096, 00:18:27.699 "tls_version": 0, 00:18:27.699 "zerocopy_threshold": 0 00:18:27.699 } 00:18:27.699 }, 00:18:27.699 { 00:18:27.699 "method": "sock_impl_set_options", 00:18:27.699 "params": { 00:18:27.699 "enable_ktls": false, 00:18:27.699 "enable_placement_id": 0, 00:18:27.699 "enable_quickack": false, 00:18:27.699 "enable_recv_pipe": true, 00:18:27.699 "enable_zerocopy_send_client": false, 00:18:27.699 "enable_zerocopy_send_server": true, 00:18:27.699 "impl_name": "posix", 00:18:27.699 "recv_buf_size": 2097152, 00:18:27.699 "send_buf_size": 2097152, 00:18:27.699 "tls_version": 0, 00:18:27.699 "zerocopy_threshold": 0 00:18:27.699 } 00:18:27.699 } 00:18:27.699 ] 00:18:27.699 }, 00:18:27.699 { 00:18:27.699 "subsystem": "vmd", 00:18:27.699 "config": [] 00:18:27.699 }, 00:18:27.699 { 00:18:27.699 "subsystem": "accel", 00:18:27.699 "config": [ 00:18:27.699 { 00:18:27.699 "method": "accel_set_options", 00:18:27.699 "params": { 00:18:27.699 "buf_count": 2048, 00:18:27.699 "large_cache_size": 16, 00:18:27.699 "sequence_count": 2048, 00:18:27.699 "small_cache_size": 128, 00:18:27.699 "task_count": 2048 00:18:27.699 } 00:18:27.699 } 00:18:27.699 ] 00:18:27.699 }, 00:18:27.699 { 00:18:27.699 "subsystem": "bdev", 00:18:27.699 "config": [ 00:18:27.699 { 00:18:27.699 "method": "bdev_set_options", 00:18:27.699 "params": { 00:18:27.699 "bdev_auto_examine": true, 00:18:27.700 "bdev_io_cache_size": 256, 00:18:27.700 "bdev_io_pool_size": 65535, 00:18:27.700 "iobuf_large_cache_size": 16, 00:18:27.700 "iobuf_small_cache_size": 128 00:18:27.700 } 00:18:27.700 }, 00:18:27.700 { 00:18:27.700 "method": "bdev_raid_set_options", 00:18:27.700 "params": { 00:18:27.700 "process_max_bandwidth_mb_sec": 0, 00:18:27.700 "process_window_size_kb": 1024 00:18:27.700 } 00:18:27.700 }, 00:18:27.700 { 00:18:27.700 "method": "bdev_iscsi_set_options", 00:18:27.700 "params": { 00:18:27.700 "timeout_sec": 30 00:18:27.700 } 00:18:27.700 }, 00:18:27.700 { 00:18:27.700 "method": "bdev_nvme_set_options", 00:18:27.700 "params": { 00:18:27.700 "action_on_timeout": "none", 00:18:27.700 "allow_accel_sequence": false, 00:18:27.700 "arbitration_burst": 0, 00:18:27.700 "bdev_retry_count": 3, 00:18:27.700 "ctrlr_loss_timeout_sec": 0, 00:18:27.700 "delay_cmd_submit": true, 00:18:27.700 "dhchap_dhgroups": [ 00:18:27.700 "null", 00:18:27.700 "ffdhe2048", 00:18:27.700 "ffdhe3072", 00:18:27.700 "ffdhe4096", 00:18:27.700 "ffdhe6144", 00:18:27.700 "ffdhe8192" 00:18:27.700 ], 00:18:27.700 "dhchap_digests": [ 00:18:27.700 "sha256", 00:18:27.700 "sha384", 00:18:27.700 "sha512" 00:18:27.700 ], 00:18:27.700 "disable_auto_failback": false, 00:18:27.700 "fast_io_fail_timeout_sec": 0, 00:18:27.700 "generate_uuids": false, 00:18:27.700 "high_priority_weight": 0, 00:18:27.700 
"io_path_stat": false, 00:18:27.700 "io_queue_requests": 0, 00:18:27.700 "keep_alive_timeout_ms": 10000, 00:18:27.700 "low_priority_weight": 0, 00:18:27.700 "medium_priority_weight": 0, 00:18:27.700 "nvme_adminq_poll_period_us": 10000, 00:18:27.700 "nvme_error_stat": false, 00:18:27.700 "nvme_ioq_poll_period_us": 0, 00:18:27.700 "rdma_cm_event_timeout_ms": 0, 00:18:27.700 "rdma_max_cq_size": 0, 00:18:27.700 "rdma_srq_size": 0, 00:18:27.700 "reconnect_delay_sec": 0, 00:18:27.700 "timeout_admin_us": 0, 00:18:27.700 "timeout_us": 0, 00:18:27.700 "transport_ack_timeout": 0, 00:18:27.700 "transport_retry_count": 4, 00:18:27.700 "transport_tos": 0 00:18:27.700 } 00:18:27.700 }, 00:18:27.700 { 00:18:27.700 "method": "bdev_nvme_set_hotplug", 00:18:27.700 "params": { 00:18:27.700 "enable": false, 00:18:27.700 "period_us": 100000 00:18:27.700 } 00:18:27.700 }, 00:18:27.700 { 00:18:27.700 "method": "bdev_malloc_create", 00:18:27.700 "params": { 00:18:27.700 "block_size": 4096, 00:18:27.700 "dif_is_head_of_md": false, 00:18:27.700 "dif_pi_format": 0, 00:18:27.700 "dif_type": 0, 00:18:27.700 "md_size": 0, 00:18:27.700 "name": "malloc0", 00:18:27.700 "num_blocks": 8192, 00:18:27.700 "optimal_io_boundary": 0, 00:18:27.700 "physical_block_size": 4096, 00:18:27.700 "uuid": "2717e625-8e1b-411a-a322-18b21a5a0dcf" 00:18:27.700 } 00:18:27.700 }, 00:18:27.700 { 00:18:27.700 "method": "bdev_wait_for_examine" 00:18:27.700 } 00:18:27.700 ] 00:18:27.700 }, 00:18:27.700 { 00:18:27.700 "subsystem": "nbd", 00:18:27.700 "config": [] 00:18:27.700 }, 00:18:27.700 { 00:18:27.700 "subsystem": "scheduler", 00:18:27.700 "config": [ 00:18:27.700 { 00:18:27.700 "method": "framework_set_scheduler", 00:18:27.700 "params": { 00:18:27.700 "name": "static" 00:18:27.700 } 00:18:27.700 } 00:18:27.700 ] 00:18:27.700 }, 00:18:27.700 { 00:18:27.700 "subsystem": "nvmf", 00:18:27.700 "config": [ 00:18:27.700 { 00:18:27.700 "method": "nvmf_set_config", 00:18:27.700 "params": { 00:18:27.700 "admin_cmd_passthru": { 00:18:27.700 "identify_ctrlr": false 00:18:27.700 }, 00:18:27.700 "dhchap_dhgroups": [ 00:18:27.700 "null", 00:18:27.700 "ffdhe2048", 00:18:27.700 "ffdhe3072", 00:18:27.700 "ffdhe4096", 00:18:27.700 "ffdhe6144", 00:18:27.700 "ffdhe8192" 00:18:27.700 ], 00:18:27.700 "dhchap_digests": [ 00:18:27.700 "sha256", 00:18:27.700 "sha384", 00:18:27.700 "sha512" 00:18:27.700 ], 00:18:27.700 "discovery_filter": "match_any" 00:18:27.700 } 00:18:27.700 }, 00:18:27.700 { 00:18:27.700 "method": "nvmf_set_max_subsystems", 00:18:27.700 "params": { 00:18:27.700 "max_subsystems": 1024 00:18:27.700 } 00:18:27.700 }, 00:18:27.700 { 00:18:27.700 "method": "nvmf_set_crdt", 00:18:27.700 "params": { 00:18:27.700 "crdt1": 0, 00:18:27.700 "crdt2": 0, 00:18:27.700 "crdt3": 0 00:18:27.700 } 00:18:27.700 }, 00:18:27.700 { 00:18:27.700 "method": "nvmf_create_transport", 00:18:27.700 "params": { 00:18:27.700 "abort_timeout_sec": 1, 00:18:27.700 "ack_timeout": 0, 00:18:27.700 "buf_cache_size": 4294967295, 00:18:27.700 "c2h_success": false, 00:18:27.700 "data_wr_pool_size": 0, 00:18:27.700 "dif_insert_or_strip": false, 00:18:27.700 "in_capsule_data_size": 4096, 00:18:27.700 "io_unit_size": 131072, 00:18:27.700 "max_aq_depth": 128, 00:18:27.700 "max_io_qpairs_per_ctrlr": 127, 00:18:27.700 "max_io_size": 131072, 00:18:27.700 "max_queue_depth": 128, 00:18:27.700 "num_shared_buffers": 511, 00:18:27.700 "sock_priority": 0, 00:18:27.700 "trtype": "TCP", 00:18:27.700 "zcopy": false 00:18:27.700 } 00:18:27.700 }, 00:18:27.700 { 00:18:27.700 "method": 
"nvmf_create_subsystem", 00:18:27.700 "params": { 00:18:27.700 "allow_any_host": false, 00:18:27.700 "ana_reporting": false, 00:18:27.700 "max_cntlid": 65519, 00:18:27.700 "max_namespaces": 10, 00:18:27.700 "min_cntlid": 1, 00:18:27.700 "model_number": "SPDK bdev Controller", 00:18:27.700 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:27.700 "serial_number": "SPDK00000000000001" 00:18:27.700 } 00:18:27.700 }, 00:18:27.700 { 00:18:27.700 "method": "nvmf_subsystem_add_host", 00:18:27.700 "params": { 00:18:27.700 "host": "nqn.2016-06.io.spdk:host1", 00:18:27.700 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:27.700 "psk": "key0" 00:18:27.700 } 00:18:27.700 }, 00:18:27.700 { 00:18:27.700 "method": "nvmf_subsystem_add_ns", 00:18:27.700 "params": { 00:18:27.700 "namespace": { 00:18:27.700 "bdev_name": "malloc0", 00:18:27.700 "nguid": "2717E6258E1B411AA32218B21A5A0DCF", 00:18:27.700 "no_auto_visible": false, 00:18:27.700 "nsid": 1, 00:18:27.700 "uuid": "2717e625-8e1b-411a-a322-18b21a5a0dcf" 00:18:27.700 }, 00:18:27.700 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:18:27.700 } 00:18:27.700 }, 00:18:27.700 { 00:18:27.700 "method": "nvmf_subsystem_add_listener", 00:18:27.700 "params": { 00:18:27.700 "listen_address": { 00:18:27.700 "adrfam": "IPv4", 00:18:27.700 "traddr": "10.0.0.3", 00:18:27.700 "trsvcid": "4420", 00:18:27.700 "trtype": "TCP" 00:18:27.701 }, 00:18:27.701 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:27.701 "secure_channel": true 00:18:27.701 } 00:18:27.701 } 00:18:27.701 ] 00:18:27.701 } 00:18:27.701 ] 00:18:27.701 }' 00:18:27.701 11:31:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:18:28.268 11:31:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:18:28.268 "subsystems": [ 00:18:28.268 { 00:18:28.268 "subsystem": "keyring", 00:18:28.268 "config": [ 00:18:28.268 { 00:18:28.268 "method": "keyring_file_add_key", 00:18:28.268 "params": { 00:18:28.268 "name": "key0", 00:18:28.268 "path": "/tmp/tmp.KsWL0pcwzf" 00:18:28.268 } 00:18:28.268 } 00:18:28.268 ] 00:18:28.268 }, 00:18:28.268 { 00:18:28.268 "subsystem": "iobuf", 00:18:28.268 "config": [ 00:18:28.268 { 00:18:28.268 "method": "iobuf_set_options", 00:18:28.268 "params": { 00:18:28.268 "enable_numa": false, 00:18:28.268 "large_bufsize": 135168, 00:18:28.268 "large_pool_count": 1024, 00:18:28.268 "small_bufsize": 8192, 00:18:28.268 "small_pool_count": 8192 00:18:28.268 } 00:18:28.268 } 00:18:28.268 ] 00:18:28.268 }, 00:18:28.268 { 00:18:28.268 "subsystem": "sock", 00:18:28.268 "config": [ 00:18:28.268 { 00:18:28.268 "method": "sock_set_default_impl", 00:18:28.268 "params": { 00:18:28.268 "impl_name": "posix" 00:18:28.268 } 00:18:28.268 }, 00:18:28.268 { 00:18:28.268 "method": "sock_impl_set_options", 00:18:28.268 "params": { 00:18:28.268 "enable_ktls": false, 00:18:28.268 "enable_placement_id": 0, 00:18:28.268 "enable_quickack": false, 00:18:28.268 "enable_recv_pipe": true, 00:18:28.268 "enable_zerocopy_send_client": false, 00:18:28.268 "enable_zerocopy_send_server": true, 00:18:28.268 "impl_name": "ssl", 00:18:28.268 "recv_buf_size": 4096, 00:18:28.268 "send_buf_size": 4096, 00:18:28.268 "tls_version": 0, 00:18:28.268 "zerocopy_threshold": 0 00:18:28.268 } 00:18:28.268 }, 00:18:28.268 { 00:18:28.268 "method": "sock_impl_set_options", 00:18:28.268 "params": { 00:18:28.268 "enable_ktls": false, 00:18:28.268 "enable_placement_id": 0, 00:18:28.268 "enable_quickack": false, 00:18:28.268 "enable_recv_pipe": true, 
00:18:28.268 "enable_zerocopy_send_client": false, 00:18:28.268 "enable_zerocopy_send_server": true, 00:18:28.268 "impl_name": "posix", 00:18:28.268 "recv_buf_size": 2097152, 00:18:28.268 "send_buf_size": 2097152, 00:18:28.268 "tls_version": 0, 00:18:28.268 "zerocopy_threshold": 0 00:18:28.268 } 00:18:28.268 } 00:18:28.268 ] 00:18:28.268 }, 00:18:28.268 { 00:18:28.268 "subsystem": "vmd", 00:18:28.268 "config": [] 00:18:28.268 }, 00:18:28.268 { 00:18:28.268 "subsystem": "accel", 00:18:28.268 "config": [ 00:18:28.268 { 00:18:28.268 "method": "accel_set_options", 00:18:28.268 "params": { 00:18:28.268 "buf_count": 2048, 00:18:28.268 "large_cache_size": 16, 00:18:28.268 "sequence_count": 2048, 00:18:28.268 "small_cache_size": 128, 00:18:28.268 "task_count": 2048 00:18:28.268 } 00:18:28.268 } 00:18:28.268 ] 00:18:28.268 }, 00:18:28.268 { 00:18:28.268 "subsystem": "bdev", 00:18:28.268 "config": [ 00:18:28.268 { 00:18:28.268 "method": "bdev_set_options", 00:18:28.268 "params": { 00:18:28.268 "bdev_auto_examine": true, 00:18:28.268 "bdev_io_cache_size": 256, 00:18:28.268 "bdev_io_pool_size": 65535, 00:18:28.268 "iobuf_large_cache_size": 16, 00:18:28.268 "iobuf_small_cache_size": 128 00:18:28.268 } 00:18:28.268 }, 00:18:28.268 { 00:18:28.268 "method": "bdev_raid_set_options", 00:18:28.268 "params": { 00:18:28.268 "process_max_bandwidth_mb_sec": 0, 00:18:28.268 "process_window_size_kb": 1024 00:18:28.268 } 00:18:28.268 }, 00:18:28.268 { 00:18:28.268 "method": "bdev_iscsi_set_options", 00:18:28.268 "params": { 00:18:28.268 "timeout_sec": 30 00:18:28.268 } 00:18:28.268 }, 00:18:28.268 { 00:18:28.268 "method": "bdev_nvme_set_options", 00:18:28.268 "params": { 00:18:28.268 "action_on_timeout": "none", 00:18:28.268 "allow_accel_sequence": false, 00:18:28.269 "arbitration_burst": 0, 00:18:28.269 "bdev_retry_count": 3, 00:18:28.269 "ctrlr_loss_timeout_sec": 0, 00:18:28.269 "delay_cmd_submit": true, 00:18:28.269 "dhchap_dhgroups": [ 00:18:28.269 "null", 00:18:28.269 "ffdhe2048", 00:18:28.269 "ffdhe3072", 00:18:28.269 "ffdhe4096", 00:18:28.269 "ffdhe6144", 00:18:28.269 "ffdhe8192" 00:18:28.269 ], 00:18:28.269 "dhchap_digests": [ 00:18:28.269 "sha256", 00:18:28.269 "sha384", 00:18:28.269 "sha512" 00:18:28.269 ], 00:18:28.269 "disable_auto_failback": false, 00:18:28.269 "fast_io_fail_timeout_sec": 0, 00:18:28.269 "generate_uuids": false, 00:18:28.269 "high_priority_weight": 0, 00:18:28.269 "io_path_stat": false, 00:18:28.269 "io_queue_requests": 512, 00:18:28.269 "keep_alive_timeout_ms": 10000, 00:18:28.269 "low_priority_weight": 0, 00:18:28.269 "medium_priority_weight": 0, 00:18:28.269 "nvme_adminq_poll_period_us": 10000, 00:18:28.269 "nvme_error_stat": false, 00:18:28.269 "nvme_ioq_poll_period_us": 0, 00:18:28.269 "rdma_cm_event_timeout_ms": 0, 00:18:28.269 "rdma_max_cq_size": 0, 00:18:28.269 "rdma_srq_size": 0, 00:18:28.269 "reconnect_delay_sec": 0, 00:18:28.269 "timeout_admin_us": 0, 00:18:28.269 "timeout_us": 0, 00:18:28.269 "transport_ack_timeout": 0, 00:18:28.269 "transport_retry_count": 4, 00:18:28.269 "transport_tos": 0 00:18:28.269 } 00:18:28.269 }, 00:18:28.269 { 00:18:28.269 "method": "bdev_nvme_attach_controller", 00:18:28.269 "params": { 00:18:28.269 "adrfam": "IPv4", 00:18:28.269 "ctrlr_loss_timeout_sec": 0, 00:18:28.269 "ddgst": false, 00:18:28.269 "fast_io_fail_timeout_sec": 0, 00:18:28.269 "hdgst": false, 00:18:28.269 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:28.269 "multipath": "multipath", 00:18:28.269 "name": "TLSTEST", 00:18:28.269 "prchk_guard": false, 00:18:28.269 "prchk_reftag": 
false, 00:18:28.269 "psk": "key0", 00:18:28.269 "reconnect_delay_sec": 0, 00:18:28.269 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:28.269 "traddr": "10.0.0.3", 00:18:28.269 "trsvcid": "4420", 00:18:28.269 "trtype": "TCP" 00:18:28.269 } 00:18:28.269 }, 00:18:28.269 { 00:18:28.269 "method": "bdev_nvme_set_hotplug", 00:18:28.269 "params": { 00:18:28.269 "enable": false, 00:18:28.269 "period_us": 100000 00:18:28.269 } 00:18:28.269 }, 00:18:28.269 { 00:18:28.269 "method": "bdev_wait_for_examine" 00:18:28.269 } 00:18:28.269 ] 00:18:28.269 }, 00:18:28.269 { 00:18:28.269 "subsystem": "nbd", 00:18:28.269 "config": [] 00:18:28.269 } 00:18:28.269 ] 00:18:28.269 }' 00:18:28.269 11:31:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 83661 00:18:28.269 11:31:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 83661 ']' 00:18:28.269 11:31:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 83661 00:18:28.269 11:31:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:28.269 11:31:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:28.269 11:31:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83661 00:18:28.269 11:31:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:28.269 11:31:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:28.269 killing process with pid 83661 00:18:28.269 11:31:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83661' 00:18:28.269 11:31:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 83661 00:18:28.269 Received shutdown signal, test time was about 10.000000 seconds 00:18:28.269 00:18:28.269 Latency(us) 00:18:28.269 [2024-11-20T11:31:13.858Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:28.269 [2024-11-20T11:31:13.858Z] =================================================================================================================== 00:18:28.269 [2024-11-20T11:31:13.858Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:28.269 11:31:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 83661 00:18:28.269 11:31:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 83558 00:18:28.269 11:31:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 83558 ']' 00:18:28.269 11:31:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 83558 00:18:28.269 11:31:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:28.269 11:31:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:28.269 11:31:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83558 00:18:28.269 11:31:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:28.269 11:31:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:28.269 killing process with pid 83558 00:18:28.269 11:31:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83558' 00:18:28.269 
11:31:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 83558 00:18:28.269 11:31:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 83558 00:18:28.531 11:31:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:18:28.531 11:31:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:18:28.531 "subsystems": [ 00:18:28.531 { 00:18:28.531 "subsystem": "keyring", 00:18:28.531 "config": [ 00:18:28.531 { 00:18:28.531 "method": "keyring_file_add_key", 00:18:28.531 "params": { 00:18:28.531 "name": "key0", 00:18:28.531 "path": "/tmp/tmp.KsWL0pcwzf" 00:18:28.531 } 00:18:28.531 } 00:18:28.531 ] 00:18:28.531 }, 00:18:28.531 { 00:18:28.531 "subsystem": "iobuf", 00:18:28.531 "config": [ 00:18:28.531 { 00:18:28.531 "method": "iobuf_set_options", 00:18:28.531 "params": { 00:18:28.531 "enable_numa": false, 00:18:28.531 "large_bufsize": 135168, 00:18:28.531 "large_pool_count": 1024, 00:18:28.531 "small_bufsize": 8192, 00:18:28.531 "small_pool_count": 8192 00:18:28.531 } 00:18:28.531 } 00:18:28.531 ] 00:18:28.531 }, 00:18:28.531 { 00:18:28.531 "subsystem": "sock", 00:18:28.531 "config": [ 00:18:28.531 { 00:18:28.531 "method": "sock_set_default_impl", 00:18:28.531 "params": { 00:18:28.531 "impl_name": "posix" 00:18:28.531 } 00:18:28.531 }, 00:18:28.531 { 00:18:28.531 "method": "sock_impl_set_options", 00:18:28.531 "params": { 00:18:28.531 "enable_ktls": false, 00:18:28.531 "enable_placement_id": 0, 00:18:28.531 "enable_quickack": false, 00:18:28.531 "enable_recv_pipe": true, 00:18:28.531 "enable_zerocopy_send_client": false, 00:18:28.532 "enable_zerocopy_send_server": true, 00:18:28.532 "impl_name": "ssl", 00:18:28.532 "recv_buf_size": 4096, 00:18:28.532 "send_buf_size": 4096, 00:18:28.532 "tls_version": 0, 00:18:28.532 "zerocopy_threshold": 0 00:18:28.532 } 00:18:28.532 }, 00:18:28.532 { 00:18:28.532 "method": "sock_impl_set_options", 00:18:28.532 "params": { 00:18:28.532 "enable_ktls": false, 00:18:28.532 "enable_placement_id": 0, 00:18:28.532 "enable_quickack": false, 00:18:28.532 "enable_recv_pipe": true, 00:18:28.532 "enable_zerocopy_send_client": false, 00:18:28.532 "enable_zerocopy_send_server": true, 00:18:28.532 "impl_name": "posix", 00:18:28.532 "recv_buf_size": 2097152, 00:18:28.532 "send_buf_size": 2097152, 00:18:28.532 "tls_version": 0, 00:18:28.532 "zerocopy_threshold": 0 00:18:28.532 } 00:18:28.532 } 00:18:28.532 ] 00:18:28.532 }, 00:18:28.532 { 00:18:28.532 "subsystem": "vmd", 00:18:28.532 "config": [] 00:18:28.532 }, 00:18:28.532 { 00:18:28.532 "subsystem": "accel", 00:18:28.532 "config": [ 00:18:28.532 { 00:18:28.532 "method": "accel_set_options", 00:18:28.532 "params": { 00:18:28.532 "buf_count": 2048, 00:18:28.532 "large_cache_size": 16, 00:18:28.532 "sequence_count": 2048, 00:18:28.532 "small_cache_size": 128, 00:18:28.532 "task_count": 2048 00:18:28.532 } 00:18:28.532 } 00:18:28.532 ] 00:18:28.532 }, 00:18:28.532 { 00:18:28.532 "subsystem": "bdev", 00:18:28.532 "config": [ 00:18:28.532 { 00:18:28.532 "method": "bdev_set_options", 00:18:28.532 "params": { 00:18:28.532 "bdev_auto_examine": true, 00:18:28.532 "bdev_io_cache_size": 256, 00:18:28.532 "bdev_io_pool_size": 65535, 00:18:28.532 "iobuf_large_cache_size": 16, 00:18:28.532 "iobuf_small_cache_size": 128 00:18:28.532 } 00:18:28.532 }, 00:18:28.532 { 00:18:28.532 "method": "bdev_raid_set_options", 00:18:28.532 "params": { 00:18:28.532 "process_max_bandwidth_mb_sec": 0, 00:18:28.532 "process_window_size_kb": 
1024 00:18:28.532 } 00:18:28.532 }, 00:18:28.532 { 00:18:28.532 "method": "bdev_iscsi_set_options", 00:18:28.532 "params": { 00:18:28.532 "timeout_sec": 30 00:18:28.532 } 00:18:28.532 }, 00:18:28.532 { 00:18:28.532 "method": "bdev_nvme_set_options", 00:18:28.532 "params": { 00:18:28.532 "action_on_timeout": "none", 00:18:28.532 "allow_accel_sequence": false, 00:18:28.532 "arbitration_burst": 0, 00:18:28.532 "bdev_retry_count": 3, 00:18:28.532 "ctrlr_loss_timeout_sec": 0, 00:18:28.532 "delay_cmd_submit": true, 00:18:28.532 "dhchap_dhgroups": [ 00:18:28.532 "null", 00:18:28.532 "ffdhe2048", 00:18:28.532 "ffdhe3072", 00:18:28.532 "ffdhe4096", 00:18:28.532 "ffdhe6144", 00:18:28.532 "ffdhe8192" 00:18:28.532 ], 00:18:28.532 "dhchap_digests": [ 00:18:28.532 "sha256", 00:18:28.532 "sha384", 00:18:28.532 "sha512" 00:18:28.532 ], 00:18:28.532 "disable_auto_failback": false, 00:18:28.532 "fast_io_fail_timeout_sec": 0, 00:18:28.532 "generate_uuids": false, 00:18:28.532 "high_priority_weight": 0, 00:18:28.532 "io_path_stat": false, 00:18:28.532 "io_queue_requests": 0, 00:18:28.532 "keep_alive_timeout_ms": 10000, 00:18:28.532 "low_priority_weight": 0, 00:18:28.532 "medium_priority_weight": 0, 00:18:28.532 "nvme_adminq_poll_period_us": 10000, 00:18:28.532 "nvme_error_stat": false, 00:18:28.532 "nvme_ioq_poll_period_us": 0, 00:18:28.532 "rdma_cm_event_timeout_ms": 0, 00:18:28.532 "rdma_max_cq_size": 0, 00:18:28.532 "rdma_srq_size": 0, 00:18:28.532 "reconnect_delay_sec": 0, 00:18:28.532 "timeout_admin_us": 0, 00:18:28.532 "timeout_us": 0, 00:18:28.532 "transport_ack_timeout": 0, 00:18:28.532 "transport_retry_count": 4, 00:18:28.532 "transport_tos": 0 00:18:28.532 } 00:18:28.532 }, 00:18:28.532 { 00:18:28.532 "method": "bdev_nvme_set_hotplug", 00:18:28.532 "params": { 00:18:28.532 "enable": false, 00:18:28.532 "period_us": 100000 00:18:28.532 } 00:18:28.532 }, 00:18:28.532 { 00:18:28.532 "method": "bdev_malloc_create", 00:18:28.532 "params": { 00:18:28.532 "block_size": 4096, 00:18:28.532 "dif_is_head_of_md": false, 00:18:28.532 "dif_pi_format": 0, 00:18:28.532 "dif_type": 0, 00:18:28.532 "md_size": 0, 00:18:28.532 "name": "malloc0", 00:18:28.532 "num_blocks": 8192, 00:18:28.532 "optimal_io_boundary": 0, 00:18:28.532 "physical_block_size": 4096, 00:18:28.532 "uuid": "2717e625-8e1b-411a-a322-18b21a5a0dcf" 00:18:28.532 } 00:18:28.532 }, 00:18:28.532 { 00:18:28.532 "method": "bdev_wait_for_examine" 00:18:28.532 } 00:18:28.532 ] 00:18:28.532 }, 00:18:28.532 { 00:18:28.532 "subsystem": "nbd", 00:18:28.532 "config": [] 00:18:28.532 }, 00:18:28.532 { 00:18:28.532 "subsystem": "scheduler", 00:18:28.532 "config": [ 00:18:28.532 { 00:18:28.532 "method": "framework_set_scheduler", 00:18:28.532 "params": { 00:18:28.532 "name": "static" 00:18:28.532 } 00:18:28.532 } 00:18:28.532 ] 00:18:28.532 }, 00:18:28.532 { 00:18:28.532 "subsystem": "nvmf", 00:18:28.532 "config": [ 00:18:28.532 { 00:18:28.532 "method": "nvmf_set_config", 00:18:28.532 "params": { 00:18:28.532 "admin_cmd_passthru": { 00:18:28.532 "identify_ctrlr": false 00:18:28.532 }, 00:18:28.532 "dhchap_dhgroups": [ 00:18:28.532 "null", 00:18:28.532 "ffdhe2048", 00:18:28.532 "ffdhe3072", 00:18:28.532 "ffdhe4096", 00:18:28.532 "ffdhe6144", 00:18:28.532 "ffdhe8192" 00:18:28.532 ], 00:18:28.532 "dhchap_digests": [ 00:18:28.532 "sha256", 00:18:28.532 "sha384", 00:18:28.532 "sha512" 00:18:28.533 ], 00:18:28.533 "discovery_filter": "match_any" 00:18:28.533 } 00:18:28.533 }, 00:18:28.533 { 00:18:28.533 "method": "nvmf_set_max_subsystems", 00:18:28.533 "params": { 
00:18:28.533 "max_subsystems": 1024 00:18:28.533 } 00:18:28.533 }, 00:18:28.533 { 00:18:28.533 "method": "nvmf_set_crdt", 00:18:28.533 "params": { 00:18:28.533 "crdt1": 0, 00:18:28.533 "crdt2": 0, 00:18:28.533 "crdt3": 0 00:18:28.533 } 00:18:28.533 }, 00:18:28.533 { 00:18:28.533 "method": "nvmf_create_transport", 00:18:28.533 "params": { 00:18:28.533 "abort_timeout_sec": 1, 00:18:28.533 "ack_timeout": 0, 00:18:28.533 "buf_cache_size": 4294967295, 00:18:28.533 "c2h_success": false, 00:18:28.533 "data_wr_pool_size": 0, 00:18:28.533 "dif_insert_or_strip": false, 00:18:28.533 "in_capsule_data_size": 4096, 00:18:28.533 "io_unit_size": 131072, 00:18:28.533 "max_aq_depth": 128, 00:18:28.533 "max_io_qpairs_per_ctrlr": 127, 00:18:28.533 "max_io_size": 131072, 00:18:28.533 "max_queue_depth": 128, 00:18:28.533 "num_shared_buffers": 511, 00:18:28.533 "sock_priority": 0, 00:18:28.533 "trtype": "TCP", 00:18:28.533 "zcopy": false 00:18:28.533 } 00:18:28.533 }, 00:18:28.533 { 00:18:28.533 "method": "nvmf_create_subsystem", 00:18:28.533 "params": { 00:18:28.533 "allow_any_host": false, 00:18:28.533 "ana_reporting": false, 00:18:28.533 "max_cntlid": 65519, 00:18:28.533 "max_namespaces": 10, 00:18:28.533 "min_cntlid": 1, 00:18:28.533 "model_number": "SPDK bdev Controller", 00:18:28.533 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:28.533 "serial_number": "SPDK00000000000001" 00:18:28.533 } 00:18:28.533 }, 00:18:28.533 { 00:18:28.533 "method": "nvmf_subsystem_add_host", 00:18:28.533 "params": { 00:18:28.533 "host": "nqn.2016-06.io.spdk:host1", 00:18:28.533 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:28.533 "psk": "key0" 00:18:28.533 } 00:18:28.533 }, 00:18:28.533 { 00:18:28.533 "method": "nvmf_subsystem_add_ns", 00:18:28.533 "params": { 00:18:28.533 "namespace": { 00:18:28.533 "bdev_name": "malloc0", 00:18:28.533 "nguid": "2717E6258E1B411AA32218B21A5A0DCF", 00:18:28.533 "no_auto_visible": false, 00:18:28.533 "nsid": 1, 00:18:28.533 "uuid": "2717e625-8e1b-411a-a322-18b21a5a0dcf" 00:18:28.533 }, 00:18:28.533 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:18:28.533 } 00:18:28.533 }, 00:18:28.533 { 00:18:28.533 "method": "nvmf_subsystem_add_listener", 00:18:28.533 "params": { 00:18:28.533 "listen_address": { 00:18:28.533 "adrfam": "IPv4", 00:18:28.533 "traddr": "10.0.0.3", 00:18:28.533 "trsvcid": "4420", 00:18:28.533 "trtype": "TCP" 00:18:28.533 }, 00:18:28.533 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:28.533 "secure_channel": true 00:18:28.533 } 00:18:28.533 } 00:18:28.533 ] 00:18:28.533 } 00:18:28.533 ] 00:18:28.533 }' 00:18:28.533 11:31:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:28.533 11:31:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:28.533 11:31:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:28.533 11:31:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:18:28.533 11:31:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=83733 00:18:28.533 11:31:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 83733 00:18:28.533 11:31:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 83733 ']' 00:18:28.533 11:31:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:28.533 11:31:13 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:28.533 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:28.533 11:31:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:28.533 11:31:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:28.533 11:31:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:28.533 [2024-11-20 11:31:13.969590] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 00:18:28.533 [2024-11-20 11:31:13.969701] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:28.533 [2024-11-20 11:31:14.113309] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:28.794 [2024-11-20 11:31:14.145799] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:28.794 [2024-11-20 11:31:14.145859] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:28.794 [2024-11-20 11:31:14.145878] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:28.794 [2024-11-20 11:31:14.145889] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:28.794 [2024-11-20 11:31:14.145898] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:28.794 [2024-11-20 11:31:14.146252] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:28.794 [2024-11-20 11:31:14.341839] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:28.794 [2024-11-20 11:31:14.373828] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:28.794 [2024-11-20 11:31:14.374076] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:18:29.730 11:31:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:29.730 11:31:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:29.730 11:31:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:29.730 11:31:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:29.730 11:31:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:29.730 11:31:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:29.730 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
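The stage driven by target/tls.sh@205-206 replays the save_config output captured above: both the nvmf target and bdevperf are relaunched with -c reading the saved JSON, so the keyring entry, the TLS listener and the PSK-bound host come back without re-issuing the individual RPCs. A minimal sketch of the same pattern, assuming the JSON is staged in ordinary files (tgt.json and bdevperf.json are illustrative names; the test instead pipes the echo'd config through /dev/fd):

# Capture the running configuration of the target and of the bdevperf RPC server.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config > tgt.json
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config > bdevperf.json
# Relaunch both processes directly from the saved JSON.
ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF -m 0x2 -c tgt.json
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock \
    -q 128 -o 4096 -w verify -t 10 -c bdevperf.json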
00:18:29.730 11:31:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=83777 00:18:29.730 11:31:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 83777 /var/tmp/bdevperf.sock 00:18:29.730 11:31:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 83777 ']' 00:18:29.730 11:31:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:29.730 11:31:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:29.730 11:31:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:29.730 11:31:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:18:29.730 11:31:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:29.730 11:31:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:29.730 11:31:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:18:29.730 "subsystems": [ 00:18:29.730 { 00:18:29.730 "subsystem": "keyring", 00:18:29.730 "config": [ 00:18:29.730 { 00:18:29.730 "method": "keyring_file_add_key", 00:18:29.730 "params": { 00:18:29.730 "name": "key0", 00:18:29.730 "path": "/tmp/tmp.KsWL0pcwzf" 00:18:29.730 } 00:18:29.730 } 00:18:29.730 ] 00:18:29.730 }, 00:18:29.730 { 00:18:29.730 "subsystem": "iobuf", 00:18:29.730 "config": [ 00:18:29.730 { 00:18:29.730 "method": "iobuf_set_options", 00:18:29.730 "params": { 00:18:29.730 "enable_numa": false, 00:18:29.730 "large_bufsize": 135168, 00:18:29.730 "large_pool_count": 1024, 00:18:29.730 "small_bufsize": 8192, 00:18:29.730 "small_pool_count": 8192 00:18:29.730 } 00:18:29.730 } 00:18:29.730 ] 00:18:29.730 }, 00:18:29.730 { 00:18:29.730 "subsystem": "sock", 00:18:29.730 "config": [ 00:18:29.730 { 00:18:29.730 "method": "sock_set_default_impl", 00:18:29.730 "params": { 00:18:29.730 "impl_name": "posix" 00:18:29.730 } 00:18:29.730 }, 00:18:29.730 { 00:18:29.730 "method": "sock_impl_set_options", 00:18:29.730 "params": { 00:18:29.730 "enable_ktls": false, 00:18:29.730 "enable_placement_id": 0, 00:18:29.730 "enable_quickack": false, 00:18:29.730 "enable_recv_pipe": true, 00:18:29.730 "enable_zerocopy_send_client": false, 00:18:29.730 "enable_zerocopy_send_server": true, 00:18:29.730 "impl_name": "ssl", 00:18:29.730 "recv_buf_size": 4096, 00:18:29.730 "send_buf_size": 4096, 00:18:29.730 "tls_version": 0, 00:18:29.730 "zerocopy_threshold": 0 00:18:29.730 } 00:18:29.730 }, 00:18:29.730 { 00:18:29.730 "method": "sock_impl_set_options", 00:18:29.730 "params": { 00:18:29.730 "enable_ktls": false, 00:18:29.730 "enable_placement_id": 0, 00:18:29.730 "enable_quickack": false, 00:18:29.730 "enable_recv_pipe": true, 00:18:29.730 "enable_zerocopy_send_client": false, 00:18:29.730 "enable_zerocopy_send_server": true, 00:18:29.730 "impl_name": "posix", 00:18:29.730 "recv_buf_size": 2097152, 00:18:29.730 "send_buf_size": 2097152, 00:18:29.730 "tls_version": 0, 00:18:29.730 "zerocopy_threshold": 0 00:18:29.730 } 00:18:29.730 } 00:18:29.730 ] 00:18:29.730 }, 00:18:29.730 { 00:18:29.730 "subsystem": "vmd", 00:18:29.730 "config": [] 00:18:29.730 }, 00:18:29.730 { 00:18:29.730 "subsystem": "accel", 00:18:29.730 
"config": [ 00:18:29.730 { 00:18:29.730 "method": "accel_set_options", 00:18:29.730 "params": { 00:18:29.730 "buf_count": 2048, 00:18:29.730 "large_cache_size": 16, 00:18:29.730 "sequence_count": 2048, 00:18:29.730 "small_cache_size": 128, 00:18:29.730 "task_count": 2048 00:18:29.730 } 00:18:29.730 } 00:18:29.730 ] 00:18:29.730 }, 00:18:29.730 { 00:18:29.730 "subsystem": "bdev", 00:18:29.730 "config": [ 00:18:29.730 { 00:18:29.731 "method": "bdev_set_options", 00:18:29.731 "params": { 00:18:29.731 "bdev_auto_examine": true, 00:18:29.731 "bdev_io_cache_size": 256, 00:18:29.731 "bdev_io_pool_size": 65535, 00:18:29.731 "iobuf_large_cache_size": 16, 00:18:29.731 "iobuf_small_cache_size": 128 00:18:29.731 } 00:18:29.731 }, 00:18:29.731 { 00:18:29.731 "method": "bdev_raid_set_options", 00:18:29.731 "params": { 00:18:29.731 "process_max_bandwidth_mb_sec": 0, 00:18:29.731 "process_window_size_kb": 1024 00:18:29.731 } 00:18:29.731 }, 00:18:29.731 { 00:18:29.731 "method": "bdev_iscsi_set_options", 00:18:29.731 "params": { 00:18:29.731 "timeout_sec": 30 00:18:29.731 } 00:18:29.731 }, 00:18:29.731 { 00:18:29.731 "method": "bdev_nvme_set_options", 00:18:29.731 "params": { 00:18:29.731 "action_on_timeout": "none", 00:18:29.731 "allow_accel_sequence": false, 00:18:29.731 "arbitration_burst": 0, 00:18:29.731 "bdev_retry_count": 3, 00:18:29.731 "ctrlr_loss_timeout_sec": 0, 00:18:29.731 "delay_cmd_submit": true, 00:18:29.731 "dhchap_dhgroups": [ 00:18:29.731 "null", 00:18:29.731 "ffdhe2048", 00:18:29.731 "ffdhe3072", 00:18:29.731 "ffdhe4096", 00:18:29.731 "ffdhe6144", 00:18:29.731 "ffdhe8192" 00:18:29.731 ], 00:18:29.731 "dhchap_digests": [ 00:18:29.731 "sha256", 00:18:29.731 "sha384", 00:18:29.731 "sha512" 00:18:29.731 ], 00:18:29.731 "disable_auto_failback": false, 00:18:29.731 "fast_io_fail_timeout_sec": 0, 00:18:29.731 "generate_uuids": false, 00:18:29.731 "high_priority_weight": 0, 00:18:29.731 "io_path_stat": false, 00:18:29.731 "io_queue_requests": 512, 00:18:29.731 "keep_alive_timeout_ms": 10000, 00:18:29.731 "low_priority_weight": 0, 00:18:29.731 "medium_priority_weight": 0, 00:18:29.731 "nvme_adminq_poll_period_us": 10000, 00:18:29.731 "nvme_error_stat": false, 00:18:29.731 "nvme_ioq_poll_period_us": 0, 00:18:29.731 "rdma_cm_event_timeout_ms": 0, 00:18:29.731 "rdma_max_cq_size": 0, 00:18:29.731 "rdma_srq_size": 0, 00:18:29.731 "reconnect_delay_sec": 0, 00:18:29.731 "timeout_admin_us": 0, 00:18:29.731 "timeout_us": 0, 00:18:29.731 "transport_ack_timeout": 0, 00:18:29.731 "transport_retry_count": 4, 00:18:29.731 "transport_tos": 0 00:18:29.731 } 00:18:29.731 }, 00:18:29.731 { 00:18:29.731 "method": "bdev_nvme_attach_controller", 00:18:29.731 "params": { 00:18:29.731 "adrfam": "IPv4", 00:18:29.731 "ctrlr_loss_timeout_sec": 0, 00:18:29.731 "ddgst": false, 00:18:29.731 "fast_io_fail_timeout_sec": 0, 00:18:29.731 "hdgst": false, 00:18:29.731 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:29.731 "multipath": "multipath", 00:18:29.731 "name": "TLSTEST", 00:18:29.731 "prchk_guard": false, 00:18:29.731 "prchk_reftag": false, 00:18:29.731 "psk": "key0", 00:18:29.731 "reconnect_delay_sec": 0, 00:18:29.731 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:29.731 "traddr": "10.0.0.3", 00:18:29.731 "trsvcid": "4420", 00:18:29.731 "trtype": "TCP" 00:18:29.731 } 00:18:29.731 }, 00:18:29.731 { 00:18:29.731 "method": "bdev_nvme_set_hotplug", 00:18:29.731 "params": { 00:18:29.731 "enable": false, 00:18:29.731 "period_us": 100000 00:18:29.731 } 00:18:29.731 }, 00:18:29.731 { 00:18:29.731 "method": "bdev_wait_for_examine" 
00:18:29.731 } 00:18:29.731 ] 00:18:29.731 }, 00:18:29.731 { 00:18:29.731 "subsystem": "nbd", 00:18:29.731 "config": [] 00:18:29.731 } 00:18:29.731 ] 00:18:29.731 }' 00:18:29.731 [2024-11-20 11:31:15.184711] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 00:18:29.731 [2024-11-20 11:31:15.184817] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83777 ] 00:18:29.989 [2024-11-20 11:31:15.332344] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:29.989 [2024-11-20 11:31:15.365809] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:29.989 [2024-11-20 11:31:15.501207] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:30.924 11:31:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:30.924 11:31:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:30.924 11:31:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:18:30.924 Running I/O for 10 seconds... 00:18:33.234 3596.00 IOPS, 14.05 MiB/s [2024-11-20T11:31:19.758Z] 3801.00 IOPS, 14.85 MiB/s [2024-11-20T11:31:20.693Z] 3869.00 IOPS, 15.11 MiB/s [2024-11-20T11:31:21.628Z] 3897.00 IOPS, 15.22 MiB/s [2024-11-20T11:31:22.561Z] 3917.80 IOPS, 15.30 MiB/s [2024-11-20T11:31:23.495Z] 3925.83 IOPS, 15.34 MiB/s [2024-11-20T11:31:24.867Z] 3910.14 IOPS, 15.27 MiB/s [2024-11-20T11:31:25.801Z] 3915.38 IOPS, 15.29 MiB/s [2024-11-20T11:31:26.752Z] 3913.44 IOPS, 15.29 MiB/s [2024-11-20T11:31:26.752Z] 3914.20 IOPS, 15.29 MiB/s 00:18:41.164 Latency(us) 00:18:41.164 [2024-11-20T11:31:26.753Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:41.164 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:41.164 Verification LBA range: start 0x0 length 0x2000 00:18:41.164 TLSTESTn1 : 10.02 3920.01 15.31 0.00 0.00 32593.40 6106.76 30504.03 00:18:41.164 [2024-11-20T11:31:26.753Z] =================================================================================================================== 00:18:41.164 [2024-11-20T11:31:26.753Z] Total : 3920.01 15.31 0.00 0.00 32593.40 6106.76 30504.03 00:18:41.164 { 00:18:41.164 "results": [ 00:18:41.164 { 00:18:41.164 "job": "TLSTESTn1", 00:18:41.164 "core_mask": "0x4", 00:18:41.164 "workload": "verify", 00:18:41.164 "status": "finished", 00:18:41.164 "verify_range": { 00:18:41.164 "start": 0, 00:18:41.164 "length": 8192 00:18:41.164 }, 00:18:41.164 "queue_depth": 128, 00:18:41.164 "io_size": 4096, 00:18:41.164 "runtime": 10.017833, 00:18:41.164 "iops": 3920.0094471528923, 00:18:41.164 "mibps": 15.312536902940986, 00:18:41.164 "io_failed": 0, 00:18:41.164 "io_timeout": 0, 00:18:41.164 "avg_latency_us": 32593.40421899669, 00:18:41.164 "min_latency_us": 6106.763636363637, 00:18:41.164 "max_latency_us": 30504.02909090909 00:18:41.164 } 00:18:41.164 ], 00:18:41.164 "core_count": 1 00:18:41.164 } 00:18:41.164 11:31:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:41.164 11:31:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 83777 00:18:41.164 11:31:26 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 83777 ']' 00:18:41.164 11:31:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 83777 00:18:41.164 11:31:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:41.164 11:31:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:41.164 11:31:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83777 00:18:41.164 11:31:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:41.164 killing process with pid 83777 00:18:41.164 11:31:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:41.164 11:31:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83777' 00:18:41.164 11:31:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 83777 00:18:41.164 11:31:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 83777 00:18:41.164 Received shutdown signal, test time was about 10.000000 seconds 00:18:41.164 00:18:41.164 Latency(us) 00:18:41.164 [2024-11-20T11:31:26.753Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:41.164 [2024-11-20T11:31:26.753Z] =================================================================================================================== 00:18:41.164 [2024-11-20T11:31:26.753Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:41.164 11:31:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 83733 00:18:41.164 11:31:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 83733 ']' 00:18:41.164 11:31:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 83733 00:18:41.164 11:31:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:41.164 11:31:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:41.164 11:31:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83733 00:18:41.164 11:31:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:41.164 11:31:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:41.164 killing process with pid 83733 00:18:41.164 11:31:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83733' 00:18:41.164 11:31:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 83733 00:18:41.164 11:31:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 83733 00:18:41.425 11:31:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:18:41.425 11:31:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:41.425 11:31:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:41.425 11:31:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:41.425 11:31:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=83934 00:18:41.425 11:31:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- 
# ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:18:41.425 11:31:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 83934 00:18:41.425 11:31:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 83934 ']' 00:18:41.425 11:31:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:41.425 11:31:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:41.425 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:41.425 11:31:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:41.425 11:31:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:41.425 11:31:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:41.425 [2024-11-20 11:31:26.908907] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 00:18:41.425 [2024-11-20 11:31:26.909017] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:41.684 [2024-11-20 11:31:27.057268] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:41.684 [2024-11-20 11:31:27.095291] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:41.684 [2024-11-20 11:31:27.095359] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:41.684 [2024-11-20 11:31:27.095373] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:41.684 [2024-11-20 11:31:27.095384] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:41.684 [2024-11-20 11:31:27.095393] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
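The trace notices above are printed because the target is launched with "-e 0xFFFF" (all tracepoint groups enabled) and "-i 0" (shared-memory instance id 0). A minimal sketch of acting on that hint while the target from this log is running, assuming the spdk_trace app is built and on PATH; the copy destination is only an example:

  # live snapshot of the nvmf tracepoints enabled by -e 0xFFFF on instance 0
  spdk_trace -s nvmf -i 0
  # or keep the shared-memory trace file for offline analysis once the run is over
  cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0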
00:18:41.684 [2024-11-20 11:31:27.095763] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:41.684 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:41.684 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:41.684 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:41.684 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:41.684 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:41.685 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:41.685 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.KsWL0pcwzf 00:18:41.685 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.KsWL0pcwzf 00:18:41.685 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:41.943 [2024-11-20 11:31:27.522708] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:42.200 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:42.458 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:18:42.715 [2024-11-20 11:31:28.206963] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:42.715 [2024-11-20 11:31:28.207205] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:18:42.715 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:42.972 malloc0 00:18:42.972 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:43.536 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.KsWL0pcwzf 00:18:43.536 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:18:44.100 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=84030 00:18:44.100 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:44.100 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 84030 /var/tmp/bdevperf.sock 00:18:44.100 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:18:44.100 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 84030 ']' 00:18:44.100 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 
00:18:44.100 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:44.100 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:44.100 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:44.100 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:44.100 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:44.100 [2024-11-20 11:31:29.510243] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 00:18:44.100 [2024-11-20 11:31:29.510352] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84030 ] 00:18:44.100 [2024-11-20 11:31:29.658202] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:44.358 [2024-11-20 11:31:29.693515] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:44.358 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:44.358 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:44.358 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.KsWL0pcwzf 00:18:44.642 11:31:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:18:44.899 [2024-11-20 11:31:30.341736] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:44.899 nvme0n1 00:18:44.899 11:31:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:45.157 Running I/O for 1 seconds... 
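On the host side the test mirrors the target setup through bdevperf's own RPC socket: bdevperf is started idle with -z on /var/tmp/bdevperf.sock (-m sets the core mask; -q, -o, -w and -t the queue depth, I/O size, workload and runtime), the same PSK file is registered there, and a controller is attached to the TLS listener before bdevperf.py kicks off the I/O. A condensed sketch of the commands visible in this log; the flag meanings are inferred from the job parameters echoed in the result JSON:

  BDEVPERF=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  SOCK=/var/tmp/bdevperf.sock
  # start bdevperf idle (-z) on its own RPC socket: queue depth 128, 4 KiB I/O, verify workload, 1 s runtime
  $BDEVPERF -m 2 -z -r $SOCK -q 128 -o 4096 -w verify -t 1 &
  # register the PSK on the bdevperf instance and attach to the TLS listener as host1
  $RPC -s $SOCK keyring_file_add_key key0 /tmp/tmp.KsWL0pcwzf
  $RPC -s $SOCK bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
      --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1
  # run the configured workload against the attached namespace (nvme0n1)
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s $SOCK perform_tests

In the per-run JSON that each perform_tests call prints below, mibps is simply iops times io_size: for the 10-second run earlier, 3920.01 IOPS x 4096 B / 2^20 is about 15.31 MiB/s, matching the reported value.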
00:18:46.090 3826.00 IOPS, 14.95 MiB/s 00:18:46.090 Latency(us) 00:18:46.090 [2024-11-20T11:31:31.679Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:46.090 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:18:46.090 Verification LBA range: start 0x0 length 0x2000 00:18:46.090 nvme0n1 : 1.02 3881.13 15.16 0.00 0.00 32622.82 7417.48 23712.12 00:18:46.090 [2024-11-20T11:31:31.679Z] =================================================================================================================== 00:18:46.090 [2024-11-20T11:31:31.679Z] Total : 3881.13 15.16 0.00 0.00 32622.82 7417.48 23712.12 00:18:46.090 { 00:18:46.090 "results": [ 00:18:46.090 { 00:18:46.090 "job": "nvme0n1", 00:18:46.090 "core_mask": "0x2", 00:18:46.090 "workload": "verify", 00:18:46.090 "status": "finished", 00:18:46.090 "verify_range": { 00:18:46.091 "start": 0, 00:18:46.091 "length": 8192 00:18:46.091 }, 00:18:46.091 "queue_depth": 128, 00:18:46.091 "io_size": 4096, 00:18:46.091 "runtime": 1.018775, 00:18:46.091 "iops": 3881.1317513680647, 00:18:46.091 "mibps": 15.160670903781503, 00:18:46.091 "io_failed": 0, 00:18:46.091 "io_timeout": 0, 00:18:46.091 "avg_latency_us": 32622.81773485998, 00:18:46.091 "min_latency_us": 7417.483636363636, 00:18:46.091 "max_latency_us": 23712.116363636364 00:18:46.091 } 00:18:46.091 ], 00:18:46.091 "core_count": 1 00:18:46.091 } 00:18:46.091 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 84030 00:18:46.091 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 84030 ']' 00:18:46.091 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 84030 00:18:46.091 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:46.091 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:46.091 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84030 00:18:46.091 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:46.091 killing process with pid 84030 00:18:46.091 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:46.091 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84030' 00:18:46.091 Received shutdown signal, test time was about 1.000000 seconds 00:18:46.091 00:18:46.091 Latency(us) 00:18:46.091 [2024-11-20T11:31:31.680Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:46.091 [2024-11-20T11:31:31.680Z] =================================================================================================================== 00:18:46.091 [2024-11-20T11:31:31.680Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:46.091 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 84030 00:18:46.091 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 84030 00:18:46.350 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 83934 00:18:46.350 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 83934 ']' 00:18:46.350 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 83934 00:18:46.350 11:31:31 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:46.350 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:46.350 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83934 00:18:46.350 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:46.350 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:46.350 killing process with pid 83934 00:18:46.350 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83934' 00:18:46.350 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 83934 00:18:46.350 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 83934 00:18:46.608 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:18:46.608 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:46.608 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:46.608 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:46.608 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=84092 00:18:46.608 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:18:46.608 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 84092 00:18:46.608 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 84092 ']' 00:18:46.608 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:46.608 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:46.608 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:46.608 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:46.608 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:46.608 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:46.608 [2024-11-20 11:31:32.043457] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 00:18:46.608 [2024-11-20 11:31:32.043564] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:46.868 [2024-11-20 11:31:32.198353] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:46.868 [2024-11-20 11:31:32.235791] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:46.868 [2024-11-20 11:31:32.235857] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:18:46.868 [2024-11-20 11:31:32.235871] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:46.868 [2024-11-20 11:31:32.235881] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:46.868 [2024-11-20 11:31:32.235890] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:46.868 [2024-11-20 11:31:32.236240] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:46.868 11:31:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:46.868 11:31:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:46.868 11:31:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:46.868 11:31:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:46.868 11:31:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:46.868 11:31:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:46.868 11:31:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:18:46.868 11:31:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:46.868 11:31:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:46.868 [2024-11-20 11:31:32.374113] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:46.868 malloc0 00:18:46.868 [2024-11-20 11:31:32.401449] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:46.868 [2024-11-20 11:31:32.401740] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:18:46.868 11:31:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:46.868 11:31:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=84123 00:18:46.868 11:31:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # waitforlisten 84123 /var/tmp/bdevperf.sock 00:18:46.868 11:31:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:18:46.868 11:31:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 84123 ']' 00:18:46.868 11:31:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:46.868 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:46.868 11:31:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:46.868 11:31:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:46.868 11:31:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:46.868 11:31:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:47.127 [2024-11-20 11:31:32.486420] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 
00:18:47.127 [2024-11-20 11:31:32.486514] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84123 ] 00:18:47.127 [2024-11-20 11:31:32.637455] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:47.127 [2024-11-20 11:31:32.678162] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:47.385 11:31:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:47.385 11:31:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:47.385 11:31:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.KsWL0pcwzf 00:18:47.644 11:31:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:18:47.901 [2024-11-20 11:31:33.382697] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:47.901 nvme0n1 00:18:47.901 11:31:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:48.160 Running I/O for 1 seconds... 00:18:49.096 3769.00 IOPS, 14.72 MiB/s 00:18:49.096 Latency(us) 00:18:49.096 [2024-11-20T11:31:34.685Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:49.096 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:18:49.096 Verification LBA range: start 0x0 length 0x2000 00:18:49.096 nvme0n1 : 1.02 3826.89 14.95 0.00 0.00 33112.49 7626.01 30146.56 00:18:49.096 [2024-11-20T11:31:34.685Z] =================================================================================================================== 00:18:49.096 [2024-11-20T11:31:34.685Z] Total : 3826.89 14.95 0.00 0.00 33112.49 7626.01 30146.56 00:18:49.096 { 00:18:49.096 "results": [ 00:18:49.096 { 00:18:49.096 "job": "nvme0n1", 00:18:49.096 "core_mask": "0x2", 00:18:49.096 "workload": "verify", 00:18:49.096 "status": "finished", 00:18:49.096 "verify_range": { 00:18:49.096 "start": 0, 00:18:49.096 "length": 8192 00:18:49.096 }, 00:18:49.096 "queue_depth": 128, 00:18:49.096 "io_size": 4096, 00:18:49.096 "runtime": 1.01832, 00:18:49.096 "iops": 3826.8913504595803, 00:18:49.096 "mibps": 14.948794337732735, 00:18:49.096 "io_failed": 0, 00:18:49.096 "io_timeout": 0, 00:18:49.096 "avg_latency_us": 33112.48939463923, 00:18:49.096 "min_latency_us": 7626.007272727273, 00:18:49.096 "max_latency_us": 30146.56 00:18:49.096 } 00:18:49.096 ], 00:18:49.096 "core_count": 1 00:18:49.096 } 00:18:49.096 11:31:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:18:49.096 11:31:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:49.096 11:31:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:49.355 11:31:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:49.355 11:31:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 
00:18:49.355 "subsystems": [ 00:18:49.355 { 00:18:49.355 "subsystem": "keyring", 00:18:49.355 "config": [ 00:18:49.355 { 00:18:49.355 "method": "keyring_file_add_key", 00:18:49.355 "params": { 00:18:49.355 "name": "key0", 00:18:49.355 "path": "/tmp/tmp.KsWL0pcwzf" 00:18:49.355 } 00:18:49.355 } 00:18:49.355 ] 00:18:49.355 }, 00:18:49.355 { 00:18:49.355 "subsystem": "iobuf", 00:18:49.355 "config": [ 00:18:49.355 { 00:18:49.355 "method": "iobuf_set_options", 00:18:49.355 "params": { 00:18:49.355 "enable_numa": false, 00:18:49.355 "large_bufsize": 135168, 00:18:49.355 "large_pool_count": 1024, 00:18:49.355 "small_bufsize": 8192, 00:18:49.355 "small_pool_count": 8192 00:18:49.355 } 00:18:49.355 } 00:18:49.355 ] 00:18:49.355 }, 00:18:49.355 { 00:18:49.355 "subsystem": "sock", 00:18:49.355 "config": [ 00:18:49.355 { 00:18:49.355 "method": "sock_set_default_impl", 00:18:49.355 "params": { 00:18:49.355 "impl_name": "posix" 00:18:49.355 } 00:18:49.355 }, 00:18:49.355 { 00:18:49.355 "method": "sock_impl_set_options", 00:18:49.355 "params": { 00:18:49.355 "enable_ktls": false, 00:18:49.355 "enable_placement_id": 0, 00:18:49.355 "enable_quickack": false, 00:18:49.355 "enable_recv_pipe": true, 00:18:49.355 "enable_zerocopy_send_client": false, 00:18:49.355 "enable_zerocopy_send_server": true, 00:18:49.355 "impl_name": "ssl", 00:18:49.355 "recv_buf_size": 4096, 00:18:49.355 "send_buf_size": 4096, 00:18:49.355 "tls_version": 0, 00:18:49.355 "zerocopy_threshold": 0 00:18:49.355 } 00:18:49.355 }, 00:18:49.355 { 00:18:49.355 "method": "sock_impl_set_options", 00:18:49.355 "params": { 00:18:49.355 "enable_ktls": false, 00:18:49.355 "enable_placement_id": 0, 00:18:49.355 "enable_quickack": false, 00:18:49.355 "enable_recv_pipe": true, 00:18:49.355 "enable_zerocopy_send_client": false, 00:18:49.355 "enable_zerocopy_send_server": true, 00:18:49.355 "impl_name": "posix", 00:18:49.355 "recv_buf_size": 2097152, 00:18:49.355 "send_buf_size": 2097152, 00:18:49.355 "tls_version": 0, 00:18:49.355 "zerocopy_threshold": 0 00:18:49.355 } 00:18:49.355 } 00:18:49.355 ] 00:18:49.355 }, 00:18:49.355 { 00:18:49.355 "subsystem": "vmd", 00:18:49.355 "config": [] 00:18:49.355 }, 00:18:49.355 { 00:18:49.355 "subsystem": "accel", 00:18:49.355 "config": [ 00:18:49.355 { 00:18:49.355 "method": "accel_set_options", 00:18:49.355 "params": { 00:18:49.355 "buf_count": 2048, 00:18:49.355 "large_cache_size": 16, 00:18:49.355 "sequence_count": 2048, 00:18:49.355 "small_cache_size": 128, 00:18:49.355 "task_count": 2048 00:18:49.355 } 00:18:49.355 } 00:18:49.355 ] 00:18:49.355 }, 00:18:49.355 { 00:18:49.356 "subsystem": "bdev", 00:18:49.356 "config": [ 00:18:49.356 { 00:18:49.356 "method": "bdev_set_options", 00:18:49.356 "params": { 00:18:49.356 "bdev_auto_examine": true, 00:18:49.356 "bdev_io_cache_size": 256, 00:18:49.356 "bdev_io_pool_size": 65535, 00:18:49.356 "iobuf_large_cache_size": 16, 00:18:49.356 "iobuf_small_cache_size": 128 00:18:49.356 } 00:18:49.356 }, 00:18:49.356 { 00:18:49.356 "method": "bdev_raid_set_options", 00:18:49.356 "params": { 00:18:49.356 "process_max_bandwidth_mb_sec": 0, 00:18:49.356 "process_window_size_kb": 1024 00:18:49.356 } 00:18:49.356 }, 00:18:49.356 { 00:18:49.356 "method": "bdev_iscsi_set_options", 00:18:49.356 "params": { 00:18:49.356 "timeout_sec": 30 00:18:49.356 } 00:18:49.356 }, 00:18:49.356 { 00:18:49.356 "method": "bdev_nvme_set_options", 00:18:49.356 "params": { 00:18:49.356 "action_on_timeout": "none", 00:18:49.356 "allow_accel_sequence": false, 00:18:49.356 "arbitration_burst": 0, 00:18:49.356 
"bdev_retry_count": 3, 00:18:49.356 "ctrlr_loss_timeout_sec": 0, 00:18:49.356 "delay_cmd_submit": true, 00:18:49.356 "dhchap_dhgroups": [ 00:18:49.356 "null", 00:18:49.356 "ffdhe2048", 00:18:49.356 "ffdhe3072", 00:18:49.356 "ffdhe4096", 00:18:49.356 "ffdhe6144", 00:18:49.356 "ffdhe8192" 00:18:49.356 ], 00:18:49.356 "dhchap_digests": [ 00:18:49.356 "sha256", 00:18:49.356 "sha384", 00:18:49.356 "sha512" 00:18:49.356 ], 00:18:49.356 "disable_auto_failback": false, 00:18:49.356 "fast_io_fail_timeout_sec": 0, 00:18:49.356 "generate_uuids": false, 00:18:49.356 "high_priority_weight": 0, 00:18:49.356 "io_path_stat": false, 00:18:49.356 "io_queue_requests": 0, 00:18:49.356 "keep_alive_timeout_ms": 10000, 00:18:49.356 "low_priority_weight": 0, 00:18:49.356 "medium_priority_weight": 0, 00:18:49.356 "nvme_adminq_poll_period_us": 10000, 00:18:49.356 "nvme_error_stat": false, 00:18:49.356 "nvme_ioq_poll_period_us": 0, 00:18:49.356 "rdma_cm_event_timeout_ms": 0, 00:18:49.356 "rdma_max_cq_size": 0, 00:18:49.356 "rdma_srq_size": 0, 00:18:49.356 "reconnect_delay_sec": 0, 00:18:49.356 "timeout_admin_us": 0, 00:18:49.356 "timeout_us": 0, 00:18:49.356 "transport_ack_timeout": 0, 00:18:49.356 "transport_retry_count": 4, 00:18:49.356 "transport_tos": 0 00:18:49.356 } 00:18:49.356 }, 00:18:49.356 { 00:18:49.356 "method": "bdev_nvme_set_hotplug", 00:18:49.356 "params": { 00:18:49.356 "enable": false, 00:18:49.356 "period_us": 100000 00:18:49.356 } 00:18:49.356 }, 00:18:49.356 { 00:18:49.356 "method": "bdev_malloc_create", 00:18:49.356 "params": { 00:18:49.356 "block_size": 4096, 00:18:49.356 "dif_is_head_of_md": false, 00:18:49.356 "dif_pi_format": 0, 00:18:49.356 "dif_type": 0, 00:18:49.356 "md_size": 0, 00:18:49.356 "name": "malloc0", 00:18:49.356 "num_blocks": 8192, 00:18:49.356 "optimal_io_boundary": 0, 00:18:49.356 "physical_block_size": 4096, 00:18:49.356 "uuid": "17fca253-56ad-4648-a362-f69946744e3f" 00:18:49.356 } 00:18:49.356 }, 00:18:49.356 { 00:18:49.356 "method": "bdev_wait_for_examine" 00:18:49.356 } 00:18:49.356 ] 00:18:49.356 }, 00:18:49.356 { 00:18:49.356 "subsystem": "nbd", 00:18:49.356 "config": [] 00:18:49.356 }, 00:18:49.356 { 00:18:49.356 "subsystem": "scheduler", 00:18:49.356 "config": [ 00:18:49.356 { 00:18:49.356 "method": "framework_set_scheduler", 00:18:49.356 "params": { 00:18:49.356 "name": "static" 00:18:49.356 } 00:18:49.356 } 00:18:49.356 ] 00:18:49.356 }, 00:18:49.356 { 00:18:49.356 "subsystem": "nvmf", 00:18:49.356 "config": [ 00:18:49.356 { 00:18:49.356 "method": "nvmf_set_config", 00:18:49.356 "params": { 00:18:49.356 "admin_cmd_passthru": { 00:18:49.356 "identify_ctrlr": false 00:18:49.356 }, 00:18:49.356 "dhchap_dhgroups": [ 00:18:49.356 "null", 00:18:49.356 "ffdhe2048", 00:18:49.356 "ffdhe3072", 00:18:49.356 "ffdhe4096", 00:18:49.356 "ffdhe6144", 00:18:49.356 "ffdhe8192" 00:18:49.356 ], 00:18:49.356 "dhchap_digests": [ 00:18:49.356 "sha256", 00:18:49.356 "sha384", 00:18:49.356 "sha512" 00:18:49.356 ], 00:18:49.356 "discovery_filter": "match_any" 00:18:49.356 } 00:18:49.356 }, 00:18:49.356 { 00:18:49.356 "method": "nvmf_set_max_subsystems", 00:18:49.356 "params": { 00:18:49.356 "max_subsystems": 1024 00:18:49.356 } 00:18:49.356 }, 00:18:49.356 { 00:18:49.356 "method": "nvmf_set_crdt", 00:18:49.356 "params": { 00:18:49.356 "crdt1": 0, 00:18:49.356 "crdt2": 0, 00:18:49.356 "crdt3": 0 00:18:49.356 } 00:18:49.356 }, 00:18:49.356 { 00:18:49.356 "method": "nvmf_create_transport", 00:18:49.356 "params": { 00:18:49.356 "abort_timeout_sec": 1, 00:18:49.356 "ack_timeout": 0, 
00:18:49.356 "buf_cache_size": 4294967295, 00:18:49.356 "c2h_success": false, 00:18:49.356 "data_wr_pool_size": 0, 00:18:49.356 "dif_insert_or_strip": false, 00:18:49.356 "in_capsule_data_size": 4096, 00:18:49.356 "io_unit_size": 131072, 00:18:49.356 "max_aq_depth": 128, 00:18:49.356 "max_io_qpairs_per_ctrlr": 127, 00:18:49.356 "max_io_size": 131072, 00:18:49.356 "max_queue_depth": 128, 00:18:49.356 "num_shared_buffers": 511, 00:18:49.356 "sock_priority": 0, 00:18:49.356 "trtype": "TCP", 00:18:49.356 "zcopy": false 00:18:49.356 } 00:18:49.356 }, 00:18:49.356 { 00:18:49.356 "method": "nvmf_create_subsystem", 00:18:49.356 "params": { 00:18:49.356 "allow_any_host": false, 00:18:49.356 "ana_reporting": false, 00:18:49.356 "max_cntlid": 65519, 00:18:49.356 "max_namespaces": 32, 00:18:49.356 "min_cntlid": 1, 00:18:49.356 "model_number": "SPDK bdev Controller", 00:18:49.356 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:49.356 "serial_number": "00000000000000000000" 00:18:49.356 } 00:18:49.356 }, 00:18:49.356 { 00:18:49.356 "method": "nvmf_subsystem_add_host", 00:18:49.356 "params": { 00:18:49.356 "host": "nqn.2016-06.io.spdk:host1", 00:18:49.356 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:49.356 "psk": "key0" 00:18:49.356 } 00:18:49.356 }, 00:18:49.356 { 00:18:49.356 "method": "nvmf_subsystem_add_ns", 00:18:49.356 "params": { 00:18:49.356 "namespace": { 00:18:49.356 "bdev_name": "malloc0", 00:18:49.356 "nguid": "17FCA25356AD4648A362F69946744E3F", 00:18:49.356 "no_auto_visible": false, 00:18:49.356 "nsid": 1, 00:18:49.356 "uuid": "17fca253-56ad-4648-a362-f69946744e3f" 00:18:49.356 }, 00:18:49.356 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:18:49.356 } 00:18:49.356 }, 00:18:49.356 { 00:18:49.356 "method": "nvmf_subsystem_add_listener", 00:18:49.356 "params": { 00:18:49.356 "listen_address": { 00:18:49.356 "adrfam": "IPv4", 00:18:49.356 "traddr": "10.0.0.3", 00:18:49.356 "trsvcid": "4420", 00:18:49.356 "trtype": "TCP" 00:18:49.356 }, 00:18:49.356 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:49.356 "secure_channel": false, 00:18:49.356 "sock_impl": "ssl" 00:18:49.356 } 00:18:49.356 } 00:18:49.356 ] 00:18:49.356 } 00:18:49.356 ] 00:18:49.356 }' 00:18:49.356 11:31:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:18:49.615 11:31:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:18:49.615 "subsystems": [ 00:18:49.615 { 00:18:49.615 "subsystem": "keyring", 00:18:49.615 "config": [ 00:18:49.615 { 00:18:49.615 "method": "keyring_file_add_key", 00:18:49.615 "params": { 00:18:49.615 "name": "key0", 00:18:49.615 "path": "/tmp/tmp.KsWL0pcwzf" 00:18:49.615 } 00:18:49.615 } 00:18:49.615 ] 00:18:49.615 }, 00:18:49.615 { 00:18:49.615 "subsystem": "iobuf", 00:18:49.615 "config": [ 00:18:49.615 { 00:18:49.615 "method": "iobuf_set_options", 00:18:49.615 "params": { 00:18:49.615 "enable_numa": false, 00:18:49.615 "large_bufsize": 135168, 00:18:49.615 "large_pool_count": 1024, 00:18:49.615 "small_bufsize": 8192, 00:18:49.615 "small_pool_count": 8192 00:18:49.615 } 00:18:49.615 } 00:18:49.615 ] 00:18:49.615 }, 00:18:49.615 { 00:18:49.615 "subsystem": "sock", 00:18:49.615 "config": [ 00:18:49.615 { 00:18:49.615 "method": "sock_set_default_impl", 00:18:49.615 "params": { 00:18:49.615 "impl_name": "posix" 00:18:49.615 } 00:18:49.615 }, 00:18:49.615 { 00:18:49.615 "method": "sock_impl_set_options", 00:18:49.615 "params": { 00:18:49.615 "enable_ktls": false, 00:18:49.615 "enable_placement_id": 0, 
00:18:49.615 "enable_quickack": false, 00:18:49.615 "enable_recv_pipe": true, 00:18:49.615 "enable_zerocopy_send_client": false, 00:18:49.615 "enable_zerocopy_send_server": true, 00:18:49.615 "impl_name": "ssl", 00:18:49.615 "recv_buf_size": 4096, 00:18:49.615 "send_buf_size": 4096, 00:18:49.615 "tls_version": 0, 00:18:49.615 "zerocopy_threshold": 0 00:18:49.615 } 00:18:49.615 }, 00:18:49.615 { 00:18:49.615 "method": "sock_impl_set_options", 00:18:49.615 "params": { 00:18:49.615 "enable_ktls": false, 00:18:49.615 "enable_placement_id": 0, 00:18:49.615 "enable_quickack": false, 00:18:49.615 "enable_recv_pipe": true, 00:18:49.615 "enable_zerocopy_send_client": false, 00:18:49.615 "enable_zerocopy_send_server": true, 00:18:49.615 "impl_name": "posix", 00:18:49.615 "recv_buf_size": 2097152, 00:18:49.615 "send_buf_size": 2097152, 00:18:49.615 "tls_version": 0, 00:18:49.615 "zerocopy_threshold": 0 00:18:49.615 } 00:18:49.615 } 00:18:49.615 ] 00:18:49.615 }, 00:18:49.615 { 00:18:49.615 "subsystem": "vmd", 00:18:49.615 "config": [] 00:18:49.615 }, 00:18:49.615 { 00:18:49.615 "subsystem": "accel", 00:18:49.615 "config": [ 00:18:49.615 { 00:18:49.615 "method": "accel_set_options", 00:18:49.615 "params": { 00:18:49.615 "buf_count": 2048, 00:18:49.615 "large_cache_size": 16, 00:18:49.615 "sequence_count": 2048, 00:18:49.615 "small_cache_size": 128, 00:18:49.615 "task_count": 2048 00:18:49.615 } 00:18:49.615 } 00:18:49.615 ] 00:18:49.615 }, 00:18:49.615 { 00:18:49.615 "subsystem": "bdev", 00:18:49.616 "config": [ 00:18:49.616 { 00:18:49.616 "method": "bdev_set_options", 00:18:49.616 "params": { 00:18:49.616 "bdev_auto_examine": true, 00:18:49.616 "bdev_io_cache_size": 256, 00:18:49.616 "bdev_io_pool_size": 65535, 00:18:49.616 "iobuf_large_cache_size": 16, 00:18:49.616 "iobuf_small_cache_size": 128 00:18:49.616 } 00:18:49.616 }, 00:18:49.616 { 00:18:49.616 "method": "bdev_raid_set_options", 00:18:49.616 "params": { 00:18:49.616 "process_max_bandwidth_mb_sec": 0, 00:18:49.616 "process_window_size_kb": 1024 00:18:49.616 } 00:18:49.616 }, 00:18:49.616 { 00:18:49.616 "method": "bdev_iscsi_set_options", 00:18:49.616 "params": { 00:18:49.616 "timeout_sec": 30 00:18:49.616 } 00:18:49.616 }, 00:18:49.616 { 00:18:49.616 "method": "bdev_nvme_set_options", 00:18:49.616 "params": { 00:18:49.616 "action_on_timeout": "none", 00:18:49.616 "allow_accel_sequence": false, 00:18:49.616 "arbitration_burst": 0, 00:18:49.616 "bdev_retry_count": 3, 00:18:49.616 "ctrlr_loss_timeout_sec": 0, 00:18:49.616 "delay_cmd_submit": true, 00:18:49.616 "dhchap_dhgroups": [ 00:18:49.616 "null", 00:18:49.616 "ffdhe2048", 00:18:49.616 "ffdhe3072", 00:18:49.616 "ffdhe4096", 00:18:49.616 "ffdhe6144", 00:18:49.616 "ffdhe8192" 00:18:49.616 ], 00:18:49.616 "dhchap_digests": [ 00:18:49.616 "sha256", 00:18:49.616 "sha384", 00:18:49.616 "sha512" 00:18:49.616 ], 00:18:49.616 "disable_auto_failback": false, 00:18:49.616 "fast_io_fail_timeout_sec": 0, 00:18:49.616 "generate_uuids": false, 00:18:49.616 "high_priority_weight": 0, 00:18:49.616 "io_path_stat": false, 00:18:49.616 "io_queue_requests": 512, 00:18:49.616 "keep_alive_timeout_ms": 10000, 00:18:49.616 "low_priority_weight": 0, 00:18:49.616 "medium_priority_weight": 0, 00:18:49.616 "nvme_adminq_poll_period_us": 10000, 00:18:49.616 "nvme_error_stat": false, 00:18:49.616 "nvme_ioq_poll_period_us": 0, 00:18:49.616 "rdma_cm_event_timeout_ms": 0, 00:18:49.616 "rdma_max_cq_size": 0, 00:18:49.616 "rdma_srq_size": 0, 00:18:49.616 "reconnect_delay_sec": 0, 00:18:49.616 "timeout_admin_us": 0, 00:18:49.616 
"timeout_us": 0, 00:18:49.616 "transport_ack_timeout": 0, 00:18:49.616 "transport_retry_count": 4, 00:18:49.616 "transport_tos": 0 00:18:49.616 } 00:18:49.616 }, 00:18:49.616 { 00:18:49.616 "method": "bdev_nvme_attach_controller", 00:18:49.616 "params": { 00:18:49.616 "adrfam": "IPv4", 00:18:49.616 "ctrlr_loss_timeout_sec": 0, 00:18:49.616 "ddgst": false, 00:18:49.616 "fast_io_fail_timeout_sec": 0, 00:18:49.616 "hdgst": false, 00:18:49.616 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:49.616 "multipath": "multipath", 00:18:49.616 "name": "nvme0", 00:18:49.616 "prchk_guard": false, 00:18:49.616 "prchk_reftag": false, 00:18:49.616 "psk": "key0", 00:18:49.616 "reconnect_delay_sec": 0, 00:18:49.616 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:49.616 "traddr": "10.0.0.3", 00:18:49.616 "trsvcid": "4420", 00:18:49.616 "trtype": "TCP" 00:18:49.616 } 00:18:49.616 }, 00:18:49.616 { 00:18:49.616 "method": "bdev_nvme_set_hotplug", 00:18:49.616 "params": { 00:18:49.616 "enable": false, 00:18:49.616 "period_us": 100000 00:18:49.616 } 00:18:49.616 }, 00:18:49.616 { 00:18:49.616 "method": "bdev_enable_histogram", 00:18:49.616 "params": { 00:18:49.616 "enable": true, 00:18:49.616 "name": "nvme0n1" 00:18:49.616 } 00:18:49.616 }, 00:18:49.616 { 00:18:49.616 "method": "bdev_wait_for_examine" 00:18:49.616 } 00:18:49.616 ] 00:18:49.616 }, 00:18:49.616 { 00:18:49.616 "subsystem": "nbd", 00:18:49.616 "config": [] 00:18:49.616 } 00:18:49.616 ] 00:18:49.616 }' 00:18:49.616 11:31:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 84123 00:18:49.616 11:31:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 84123 ']' 00:18:49.616 11:31:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 84123 00:18:49.616 11:31:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:49.616 11:31:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:49.616 11:31:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84123 00:18:49.616 11:31:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:49.616 11:31:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:49.616 killing process with pid 84123 00:18:49.616 11:31:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84123' 00:18:49.616 11:31:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 84123 00:18:49.616 Received shutdown signal, test time was about 1.000000 seconds 00:18:49.616 00:18:49.616 Latency(us) 00:18:49.616 [2024-11-20T11:31:35.205Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:49.616 [2024-11-20T11:31:35.205Z] =================================================================================================================== 00:18:49.616 [2024-11-20T11:31:35.205Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:49.616 11:31:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 84123 00:18:49.875 11:31:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 84092 00:18:49.875 11:31:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 84092 ']' 00:18:49.875 11:31:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 84092 
00:18:49.875 11:31:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:49.875 11:31:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:49.875 11:31:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84092 00:18:49.875 11:31:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:49.875 killing process with pid 84092 00:18:49.875 11:31:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:49.875 11:31:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84092' 00:18:49.875 11:31:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 84092 00:18:49.875 11:31:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 84092 00:18:50.218 11:31:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:18:50.218 11:31:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:50.218 11:31:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:18:50.218 "subsystems": [ 00:18:50.218 { 00:18:50.218 "subsystem": "keyring", 00:18:50.218 "config": [ 00:18:50.218 { 00:18:50.218 "method": "keyring_file_add_key", 00:18:50.218 "params": { 00:18:50.218 "name": "key0", 00:18:50.218 "path": "/tmp/tmp.KsWL0pcwzf" 00:18:50.218 } 00:18:50.218 } 00:18:50.218 ] 00:18:50.218 }, 00:18:50.218 { 00:18:50.218 "subsystem": "iobuf", 00:18:50.218 "config": [ 00:18:50.218 { 00:18:50.218 "method": "iobuf_set_options", 00:18:50.218 "params": { 00:18:50.218 "enable_numa": false, 00:18:50.218 "large_bufsize": 135168, 00:18:50.218 "large_pool_count": 1024, 00:18:50.218 "small_bufsize": 8192, 00:18:50.218 "small_pool_count": 8192 00:18:50.218 } 00:18:50.218 } 00:18:50.218 ] 00:18:50.218 }, 00:18:50.218 { 00:18:50.218 "subsystem": "sock", 00:18:50.218 "config": [ 00:18:50.218 { 00:18:50.218 "method": "sock_set_default_impl", 00:18:50.218 "params": { 00:18:50.218 "impl_name": "posix" 00:18:50.218 } 00:18:50.218 }, 00:18:50.218 { 00:18:50.218 "method": "sock_impl_set_options", 00:18:50.218 "params": { 00:18:50.218 "enable_ktls": false, 00:18:50.218 "enable_placement_id": 0, 00:18:50.218 "enable_quickack": false, 00:18:50.218 "enable_recv_pipe": true, 00:18:50.218 "enable_zerocopy_send_client": false, 00:18:50.218 "enable_zerocopy_send_server": true, 00:18:50.218 "impl_name": "ssl", 00:18:50.218 "recv_buf_size": 4096, 00:18:50.218 "send_buf_size": 4096, 00:18:50.218 "tls_version": 0, 00:18:50.218 "zerocopy_threshold": 0 00:18:50.218 } 00:18:50.218 }, 00:18:50.218 { 00:18:50.218 "method": "sock_impl_set_options", 00:18:50.218 "params": { 00:18:50.218 "enable_ktls": false, 00:18:50.218 "enable_placement_id": 0, 00:18:50.218 "enable_quickack": false, 00:18:50.218 "enable_recv_pipe": true, 00:18:50.218 "enable_zerocopy_send_client": false, 00:18:50.218 "enable_zerocopy_send_server": true, 00:18:50.218 "impl_name": "posix", 00:18:50.218 "recv_buf_size": 2097152, 00:18:50.218 "send_buf_size": 2097152, 00:18:50.218 "tls_version": 0, 00:18:50.218 "zerocopy_threshold": 0 00:18:50.218 } 00:18:50.219 } 00:18:50.219 ] 00:18:50.219 }, 00:18:50.219 { 00:18:50.219 "subsystem": "vmd", 00:18:50.219 "config": [] 00:18:50.219 }, 00:18:50.219 { 00:18:50.219 "subsystem": "accel", 00:18:50.219 "config": [ 
00:18:50.219 { 00:18:50.219 "method": "accel_set_options", 00:18:50.219 "params": { 00:18:50.219 "buf_count": 2048, 00:18:50.219 "large_cache_size": 16, 00:18:50.219 "sequence_count": 2048, 00:18:50.219 "small_cache_size": 128, 00:18:50.219 "task_count": 2048 00:18:50.219 } 00:18:50.219 } 00:18:50.219 ] 00:18:50.219 }, 00:18:50.219 { 00:18:50.219 "subsystem": "bdev", 00:18:50.219 "config": [ 00:18:50.219 { 00:18:50.219 "method": "bdev_set_options", 00:18:50.219 "params": { 00:18:50.219 "bdev_auto_examine": true, 00:18:50.219 "bdev_io_cache_size": 256, 00:18:50.219 "bdev_io_pool_size": 65535, 00:18:50.219 "iobuf_large_cache_size": 16, 00:18:50.219 "iobuf_small_cache_size": 128 00:18:50.219 } 00:18:50.219 }, 00:18:50.219 { 00:18:50.219 "method": "bdev_raid_set_options", 00:18:50.219 "params": { 00:18:50.219 "process_max_bandwidth_mb_sec": 0, 00:18:50.219 "process_window_size_kb": 1024 00:18:50.219 } 00:18:50.219 }, 00:18:50.219 { 00:18:50.219 "method": "bdev_iscsi_set_options", 00:18:50.219 "params": { 00:18:50.219 "timeout_sec": 30 00:18:50.219 } 00:18:50.219 }, 00:18:50.219 { 00:18:50.219 "method": "bdev_nvme_set_options", 00:18:50.219 "params": { 00:18:50.219 "action_on_timeout": "none", 00:18:50.219 "allow_accel_sequence": false, 00:18:50.219 "arbitration_burst": 0, 00:18:50.219 "bdev_retry_count": 3, 00:18:50.219 "ctrlr_loss_timeout_sec": 0, 00:18:50.219 "delay_cmd_submit": true, 00:18:50.219 "dhchap_dhgroups": [ 00:18:50.219 "null", 00:18:50.219 "ffdhe2048", 00:18:50.219 "ffdhe3072", 00:18:50.219 "ffdhe4096", 00:18:50.219 "ffdhe6144", 00:18:50.219 "ffdhe8192" 00:18:50.219 ], 00:18:50.219 "dhchap_digests": [ 00:18:50.219 "sha256", 00:18:50.219 "sha384", 00:18:50.219 "sha512" 00:18:50.219 ], 00:18:50.219 "disable_auto_failback": false, 00:18:50.219 "fast_io_fail_timeout_sec": 0, 00:18:50.219 "generate_uuids": false, 00:18:50.219 "high_priority_weight": 0, 00:18:50.219 "io_path_stat": false, 00:18:50.219 "io_queue_requests": 0, 00:18:50.219 "keep_alive_timeout_ms": 10000, 00:18:50.219 "low_priority_weight": 0, 00:18:50.219 "medium_priority_weight": 0, 00:18:50.219 "nvme_adminq_poll_period_us": 10000, 00:18:50.219 "nvme_error_stat": false, 00:18:50.219 "nvme_ioq_poll_period_us": 0, 00:18:50.219 "rdma_cm_event_timeout_ms": 0, 00:18:50.219 "rdma_max_cq_size": 0, 00:18:50.219 "rdma_srq_size": 0, 00:18:50.219 "reconnect_delay_sec": 0, 00:18:50.219 "timeout_admin_us": 0, 00:18:50.219 "timeout_us": 0, 00:18:50.219 "transport_ack_timeout": 0, 00:18:50.219 "transport_retry_count": 4, 00:18:50.219 "transport_tos": 0 00:18:50.219 } 00:18:50.219 }, 00:18:50.219 { 00:18:50.219 "method": "bdev_nvme_set_hotplug", 00:18:50.219 "params": { 00:18:50.219 "enable": false, 00:18:50.219 "period_us": 100000 00:18:50.219 } 00:18:50.219 }, 00:18:50.219 { 00:18:50.219 "method": "bdev_malloc_create", 00:18:50.219 "params": { 00:18:50.219 "block_size": 4096, 00:18:50.219 "dif_is_head_of_md": false, 00:18:50.219 "dif_pi_format": 0, 00:18:50.219 "dif_type": 0, 00:18:50.219 "md_size": 0, 00:18:50.219 "name": "malloc0", 00:18:50.219 "num_blocks": 8192, 00:18:50.219 "optimal_io_boundary": 0, 00:18:50.219 "physical_block_size": 4096, 00:18:50.219 "uuid": "17fca253-56ad-4648-a362-f69946744e3f" 00:18:50.219 } 00:18:50.219 }, 00:18:50.219 { 00:18:50.219 "method": "bdev_wait_for_examine" 00:18:50.219 } 00:18:50.219 ] 00:18:50.219 }, 00:18:50.219 { 00:18:50.219 "subsystem": "nbd", 00:18:50.219 "config": [] 00:18:50.219 }, 00:18:50.219 { 00:18:50.219 "subsystem": "scheduler", 00:18:50.219 "config": [ 00:18:50.219 { 00:18:50.219 
"method": "framework_set_scheduler", 00:18:50.219 "params": { 00:18:50.219 "name": "static" 00:18:50.219 } 00:18:50.219 } 00:18:50.219 ] 00:18:50.219 }, 00:18:50.219 { 00:18:50.219 "subsystem": "nvmf", 00:18:50.219 "config": [ 00:18:50.219 { 00:18:50.219 "method": "nvmf_set_config", 00:18:50.219 "params": { 00:18:50.219 "admin_cmd_passthru": { 00:18:50.219 "identify_ctrlr": false 00:18:50.219 }, 00:18:50.219 "dhchap_dhgroups": [ 00:18:50.219 "null", 00:18:50.219 "ffdhe2048", 00:18:50.219 "ffdhe3072", 00:18:50.219 "ffdhe4096", 00:18:50.219 "ffdhe6144", 00:18:50.219 "ffdhe8192" 00:18:50.219 ], 00:18:50.219 "dhchap_digests": [ 00:18:50.219 "sha256", 00:18:50.219 "sha384", 00:18:50.219 "sha512" 00:18:50.219 ], 00:18:50.219 "discovery_filter": "match_any" 00:18:50.219 } 00:18:50.219 }, 00:18:50.219 { 00:18:50.219 "method": "nvmf_set_max_subsystems", 00:18:50.219 "params": { 00:18:50.219 "max_subsystems": 1024 00:18:50.219 } 00:18:50.219 }, 00:18:50.219 { 00:18:50.219 "method": "nvmf_set_crdt", 00:18:50.219 "params": { 00:18:50.219 "crdt1": 0, 00:18:50.219 "crdt2": 0, 00:18:50.219 "crdt3": 0 00:18:50.219 } 00:18:50.219 }, 00:18:50.219 { 00:18:50.219 "method": "nvmf_create_transport", 00:18:50.219 "params": { 00:18:50.219 "abort_timeout_sec": 1, 00:18:50.219 "ack_timeout": 0, 00:18:50.219 "buf_cache_size": 4294967295, 00:18:50.219 "c2h_success": false, 00:18:50.219 "data_wr_pool_size": 0, 00:18:50.219 "dif_insert_or_strip": false, 00:18:50.219 "in_capsule_data_size": 4096, 00:18:50.219 "io_unit_size": 131072, 00:18:50.219 "max_aq_depth": 128, 00:18:50.219 "max_io_qpairs_per_ctrlr": 127, 00:18:50.219 "max_io_size": 131072, 00:18:50.219 "max_queue_depth": 128, 00:18:50.219 "num_shared_buffers": 511, 00:18:50.219 "sock_priority": 0, 00:18:50.219 "trtype": "TCP", 00:18:50.219 "zcopy": false 00:18:50.219 } 00:18:50.219 }, 00:18:50.219 { 00:18:50.219 "method": "nvmf_create_subsystem", 00:18:50.219 "params": { 00:18:50.219 "allow_any_host": false, 00:18:50.219 "ana_reporting": false, 00:18:50.219 "max_cntlid": 65519, 00:18:50.219 "max_namespaces": 32, 00:18:50.219 "min_cntlid": 1, 00:18:50.219 "model_number": "SPDK bdev Controller", 00:18:50.219 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:50.219 "serial_number": "00000000000000000000" 00:18:50.219 } 00:18:50.219 }, 00:18:50.219 { 00:18:50.219 "method": "nvmf_subsystem_add_host", 00:18:50.219 "params": { 00:18:50.219 "host": "nqn.2016-06.io.spdk:host1", 00:18:50.219 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:50.219 "psk": "key0" 00:18:50.219 } 00:18:50.219 }, 00:18:50.219 { 00:18:50.219 "method": "nvmf_subsystem_add_ns", 00:18:50.219 "params": { 00:18:50.219 "namespace": { 00:18:50.219 "bdev_name": "malloc0", 00:18:50.219 "nguid": "17FCA25356AD4648A362F69946744E3F", 00:18:50.219 "no_auto_visible": false, 00:18:50.219 "nsid": 1, 00:18:50.219 "uuid": "17fca253-56ad-4648-a362-f69946744e3f" 00:18:50.219 }, 00:18:50.219 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:18:50.219 } 00:18:50.219 }, 00:18:50.219 { 00:18:50.219 "method": "nvmf_subsystem_add_listener", 00:18:50.219 "params": { 00:18:50.219 "listen_address": { 00:18:50.219 "adrfam": "IPv4", 00:18:50.219 "traddr": "10.0.0.3", 00:18:50.219 "trsvcid": "4420", 00:18:50.219 "trtype": "TCP" 00:18:50.219 }, 00:18:50.219 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:50.219 "secure_channel": false, 00:18:50.219 "sock_impl": "ssl" 00:18:50.219 } 00:18:50.219 } 00:18:50.219 ] 00:18:50.219 } 00:18:50.219 ] 00:18:50.219 }' 00:18:50.219 11:31:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # 
xtrace_disable 00:18:50.219 11:31:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:50.219 11:31:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=84200 00:18:50.219 11:31:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:18:50.219 11:31:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 84200 00:18:50.219 11:31:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 84200 ']' 00:18:50.219 11:31:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:50.219 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:50.219 11:31:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:50.219 11:31:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:50.219 11:31:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:50.219 11:31:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:50.219 [2024-11-20 11:31:35.547970] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 00:18:50.219 [2024-11-20 11:31:35.548066] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:50.220 [2024-11-20 11:31:35.697013] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:50.220 [2024-11-20 11:31:35.734513] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:50.220 [2024-11-20 11:31:35.734578] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:50.220 [2024-11-20 11:31:35.734592] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:50.220 [2024-11-20 11:31:35.734602] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:50.220 [2024-11-20 11:31:35.734611] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
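The target is started here with its whole configuration passed inline: the echoed JSON document above is wired to a file descriptor and nvmf_tgt reads it back through -c /dev/fd/62. A minimal sketch of the same launch pattern, reusing the namespace, binary path, flags and key file from the trace; the JSON is cut down to the keyring entry only, so a real run would need the remaining subsystems as well:

  config='{ "subsystems": [ { "subsystem": "keyring", "config": [
      { "method": "keyring_file_add_key",
        "params": { "name": "key0", "path": "/tmp/tmp.KsWL0pcwzf" } } ] } ] }'
  # <(...) exposes the echoed JSON as /dev/fd/NN, which -c then reads as a config file
  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c <(echo "$config")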
00:18:50.220 [2024-11-20 11:31:35.735037] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:50.479 [2024-11-20 11:31:35.933199] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:50.479 [2024-11-20 11:31:35.965141] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:50.479 [2024-11-20 11:31:35.965389] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:18:51.416 11:31:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:51.416 11:31:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:51.416 11:31:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:51.416 11:31:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:51.416 11:31:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:51.416 11:31:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:51.416 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:51.416 11:31:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=84244 00:18:51.416 11:31:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 84244 /var/tmp/bdevperf.sock 00:18:51.416 11:31:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 84244 ']' 00:18:51.416 11:31:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:51.416 11:31:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:51.416 11:31:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
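Neither daemon is usable until its RPC socket exists, which is what the waitforlisten helper echoed next waits for. A rough sketch of such a poll loop, assuming the socket path from the trace and reusing the max_retries=100 value visible above; the real helper also verifies that the process is still alive and answering RPCs, so treat this as an approximation:

  sock=/var/tmp/bdevperf.sock
  max_retries=100
  echo "Waiting for process to start up and listen on UNIX domain socket $sock..."
  for (( i = 0; i < max_retries; i++ )); do
      [[ -S $sock ]] && break        # stop as soon as the socket node shows up
      sleep 0.1
  done
  [[ -S $sock ]] || { echo "timed out waiting for $sock" >&2; exit 1; }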
00:18:51.416 11:31:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:18:51.416 11:31:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:51.416 11:31:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:51.416 11:31:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:18:51.416 "subsystems": [ 00:18:51.416 { 00:18:51.416 "subsystem": "keyring", 00:18:51.416 "config": [ 00:18:51.416 { 00:18:51.416 "method": "keyring_file_add_key", 00:18:51.416 "params": { 00:18:51.416 "name": "key0", 00:18:51.416 "path": "/tmp/tmp.KsWL0pcwzf" 00:18:51.416 } 00:18:51.416 } 00:18:51.416 ] 00:18:51.416 }, 00:18:51.416 { 00:18:51.416 "subsystem": "iobuf", 00:18:51.416 "config": [ 00:18:51.416 { 00:18:51.416 "method": "iobuf_set_options", 00:18:51.416 "params": { 00:18:51.416 "enable_numa": false, 00:18:51.416 "large_bufsize": 135168, 00:18:51.416 "large_pool_count": 1024, 00:18:51.416 "small_bufsize": 8192, 00:18:51.416 "small_pool_count": 8192 00:18:51.416 } 00:18:51.416 } 00:18:51.416 ] 00:18:51.416 }, 00:18:51.416 { 00:18:51.416 "subsystem": "sock", 00:18:51.416 "config": [ 00:18:51.416 { 00:18:51.416 "method": "sock_set_default_impl", 00:18:51.416 "params": { 00:18:51.416 "impl_name": "posix" 00:18:51.416 } 00:18:51.416 }, 00:18:51.416 { 00:18:51.416 "method": "sock_impl_set_options", 00:18:51.416 "params": { 00:18:51.416 "enable_ktls": false, 00:18:51.416 "enable_placement_id": 0, 00:18:51.416 "enable_quickack": false, 00:18:51.416 "enable_recv_pipe": true, 00:18:51.416 "enable_zerocopy_send_client": false, 00:18:51.416 "enable_zerocopy_send_server": true, 00:18:51.416 "impl_name": "ssl", 00:18:51.416 "recv_buf_size": 4096, 00:18:51.416 "send_buf_size": 4096, 00:18:51.416 "tls_version": 0, 00:18:51.416 "zerocopy_threshold": 0 00:18:51.416 } 00:18:51.416 }, 00:18:51.416 { 00:18:51.416 "method": "sock_impl_set_options", 00:18:51.416 "params": { 00:18:51.416 "enable_ktls": false, 00:18:51.416 "enable_placement_id": 0, 00:18:51.416 "enable_quickack": false, 00:18:51.416 "enable_recv_pipe": true, 00:18:51.416 "enable_zerocopy_send_client": false, 00:18:51.416 "enable_zerocopy_send_server": true, 00:18:51.416 "impl_name": "posix", 00:18:51.416 "recv_buf_size": 2097152, 00:18:51.416 "send_buf_size": 2097152, 00:18:51.416 "tls_version": 0, 00:18:51.416 "zerocopy_threshold": 0 00:18:51.416 } 00:18:51.416 } 00:18:51.416 ] 00:18:51.416 }, 00:18:51.416 { 00:18:51.416 "subsystem": "vmd", 00:18:51.416 "config": [] 00:18:51.416 }, 00:18:51.416 { 00:18:51.416 "subsystem": "accel", 00:18:51.416 "config": [ 00:18:51.416 { 00:18:51.417 "method": "accel_set_options", 00:18:51.417 "params": { 00:18:51.417 "buf_count": 2048, 00:18:51.417 "large_cache_size": 16, 00:18:51.417 "sequence_count": 2048, 00:18:51.417 "small_cache_size": 128, 00:18:51.417 "task_count": 2048 00:18:51.417 } 00:18:51.417 } 00:18:51.417 ] 00:18:51.417 }, 00:18:51.417 { 00:18:51.417 "subsystem": "bdev", 00:18:51.417 "config": [ 00:18:51.417 { 00:18:51.417 "method": "bdev_set_options", 00:18:51.417 "params": { 00:18:51.417 "bdev_auto_examine": true, 00:18:51.417 "bdev_io_cache_size": 256, 00:18:51.417 "bdev_io_pool_size": 65535, 00:18:51.417 "iobuf_large_cache_size": 16, 00:18:51.417 "iobuf_small_cache_size": 128 00:18:51.417 } 00:18:51.417 }, 00:18:51.417 { 00:18:51.417 "method": "bdev_raid_set_options", 
00:18:51.417 "params": { 00:18:51.417 "process_max_bandwidth_mb_sec": 0, 00:18:51.417 "process_window_size_kb": 1024 00:18:51.417 } 00:18:51.417 }, 00:18:51.417 { 00:18:51.417 "method": "bdev_iscsi_set_options", 00:18:51.417 "params": { 00:18:51.417 "timeout_sec": 30 00:18:51.417 } 00:18:51.417 }, 00:18:51.417 { 00:18:51.417 "method": "bdev_nvme_set_options", 00:18:51.417 "params": { 00:18:51.417 "action_on_timeout": "none", 00:18:51.417 "allow_accel_sequence": false, 00:18:51.417 "arbitration_burst": 0, 00:18:51.417 "bdev_retry_count": 3, 00:18:51.417 "ctrlr_loss_timeout_sec": 0, 00:18:51.417 "delay_cmd_submit": true, 00:18:51.417 "dhchap_dhgroups": [ 00:18:51.417 "null", 00:18:51.417 "ffdhe2048", 00:18:51.417 "ffdhe3072", 00:18:51.417 "ffdhe4096", 00:18:51.417 "ffdhe6144", 00:18:51.417 "ffdhe8192" 00:18:51.417 ], 00:18:51.417 "dhchap_digests": [ 00:18:51.417 "sha256", 00:18:51.417 "sha384", 00:18:51.417 "sha512" 00:18:51.417 ], 00:18:51.417 "disable_auto_failback": false, 00:18:51.417 "fast_io_fail_timeout_sec": 0, 00:18:51.417 "generate_uuids": false, 00:18:51.417 "high_priority_weight": 0, 00:18:51.417 "io_path_stat": false, 00:18:51.417 "io_queue_requests": 512, 00:18:51.417 "keep_alive_timeout_ms": 10000, 00:18:51.417 "low_priority_weight": 0, 00:18:51.417 "medium_priority_weight": 0, 00:18:51.417 "nvme_adminq_poll_period_us": 10000, 00:18:51.417 "nvme_error_stat": false, 00:18:51.417 "nvme_ioq_poll_period_us": 0, 00:18:51.417 "rdma_cm_event_timeout_ms": 0, 00:18:51.417 "rdma_max_cq_size": 0, 00:18:51.417 "rdma_srq_size": 0, 00:18:51.417 "reconnect_delay_sec": 0, 00:18:51.417 "timeout_admin_us": 0, 00:18:51.417 "timeout_us": 0, 00:18:51.417 "transport_ack_timeout": 0, 00:18:51.417 "transport_retry_count": 4, 00:18:51.417 "transport_tos": 0 00:18:51.417 } 00:18:51.417 }, 00:18:51.417 { 00:18:51.417 "method": "bdev_nvme_attach_controller", 00:18:51.417 "params": { 00:18:51.417 "adrfam": "IPv4", 00:18:51.417 "ctrlr_loss_timeout_sec": 0, 00:18:51.417 "ddgst": false, 00:18:51.417 "fast_io_fail_timeout_sec": 0, 00:18:51.417 "hdgst": false, 00:18:51.417 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:51.417 "multipath": "multipath", 00:18:51.417 "name": "nvme0", 00:18:51.417 "prchk_guard": false, 00:18:51.417 "prchk_reftag": false, 00:18:51.417 "psk": "key0", 00:18:51.417 "reconnect_delay_sec": 0, 00:18:51.417 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:51.417 "traddr": "10.0.0.3", 00:18:51.417 "trsvcid": "4420", 00:18:51.417 "trtype": "TCP" 00:18:51.417 } 00:18:51.417 }, 00:18:51.417 { 00:18:51.417 "method": "bdev_nvme_set_hotplug", 00:18:51.417 "params": { 00:18:51.417 "enable": false, 00:18:51.417 "period_us": 100000 00:18:51.417 } 00:18:51.417 }, 00:18:51.417 { 00:18:51.417 "method": "bdev_enable_histogram", 00:18:51.417 "params": { 00:18:51.417 "enable": true, 00:18:51.417 "name": "nvme0n1" 00:18:51.417 } 00:18:51.417 }, 00:18:51.417 { 00:18:51.417 "method": "bdev_wait_for_examine" 00:18:51.417 } 00:18:51.417 ] 00:18:51.417 }, 00:18:51.417 { 00:18:51.417 "subsystem": "nbd", 00:18:51.417 "config": [] 00:18:51.417 } 00:18:51.417 ] 00:18:51.417 }' 00:18:51.417 [2024-11-20 11:31:36.730515] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 
00:18:51.417 [2024-11-20 11:31:36.730655] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84244 ] 00:18:51.417 [2024-11-20 11:31:36.878413] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:51.417 [2024-11-20 11:31:36.921835] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:51.676 [2024-11-20 11:31:37.058121] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:52.242 11:31:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:52.242 11:31:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:52.242 11:31:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:18:52.242 11:31:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:18:52.808 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:52.808 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:52.808 Running I/O for 1 seconds... 00:18:53.743 3946.00 IOPS, 15.41 MiB/s 00:18:53.743 Latency(us) 00:18:53.743 [2024-11-20T11:31:39.332Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:53.743 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:18:53.743 Verification LBA range: start 0x0 length 0x2000 00:18:53.743 nvme0n1 : 1.03 3959.94 15.47 0.00 0.00 31921.20 9949.56 22043.93 00:18:53.743 [2024-11-20T11:31:39.332Z] =================================================================================================================== 00:18:53.743 [2024-11-20T11:31:39.332Z] Total : 3959.94 15.47 0.00 0.00 31921.20 9949.56 22043.93 00:18:53.743 { 00:18:53.743 "results": [ 00:18:53.743 { 00:18:53.743 "job": "nvme0n1", 00:18:53.743 "core_mask": "0x2", 00:18:53.743 "workload": "verify", 00:18:53.743 "status": "finished", 00:18:53.743 "verify_range": { 00:18:53.743 "start": 0, 00:18:53.743 "length": 8192 00:18:53.743 }, 00:18:53.743 "queue_depth": 128, 00:18:53.743 "io_size": 4096, 00:18:53.743 "runtime": 1.028804, 00:18:53.743 "iops": 3959.937947364124, 00:18:53.743 "mibps": 15.468507606891109, 00:18:53.743 "io_failed": 0, 00:18:53.743 "io_timeout": 0, 00:18:53.743 "avg_latency_us": 31921.204509305127, 00:18:53.743 "min_latency_us": 9949.556363636364, 00:18:53.743 "max_latency_us": 22043.927272727273 00:18:53.743 } 00:18:53.743 ], 00:18:53.743 "core_count": 1 00:18:53.743 } 00:18:53.743 11:31:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:18:53.743 11:31:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:18:53.743 11:31:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:18:53.743 11:31:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # type=--id 00:18:53.743 11:31:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@813 -- # id=0 00:18:53.743 11:31:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:18:53.743 
11:31:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:18:53.743 11:31:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:18:53.743 11:31:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:18:53.743 11:31:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@824 -- # for n in $shm_files 00:18:53.743 11:31:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:18:53.743 nvmf_trace.0 00:18:54.002 11:31:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@827 -- # return 0 00:18:54.002 11:31:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 84244 00:18:54.002 11:31:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 84244 ']' 00:18:54.002 11:31:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 84244 00:18:54.002 11:31:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:54.002 11:31:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:54.002 11:31:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84244 00:18:54.002 11:31:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:54.002 killing process with pid 84244 00:18:54.002 11:31:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:54.002 11:31:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84244' 00:18:54.002 11:31:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 84244 00:18:54.002 Received shutdown signal, test time was about 1.000000 seconds 00:18:54.002 00:18:54.002 Latency(us) 00:18:54.002 [2024-11-20T11:31:39.591Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:54.002 [2024-11-20T11:31:39.591Z] =================================================================================================================== 00:18:54.002 [2024-11-20T11:31:39.591Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:54.002 11:31:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 84244 00:18:54.002 11:31:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:18:54.002 11:31:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:54.002 11:31:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:18:54.261 11:31:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:54.261 11:31:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:18:54.261 11:31:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:54.261 11:31:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:54.261 rmmod nvme_tcp 00:18:54.261 rmmod nvme_fabrics 00:18:54.261 rmmod nvme_keyring 00:18:54.261 11:31:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:54.261 11:31:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set 
-e 00:18:54.261 11:31:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:18:54.261 11:31:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # '[' -n 84200 ']' 00:18:54.261 11:31:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # killprocess 84200 00:18:54.261 11:31:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 84200 ']' 00:18:54.261 11:31:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 84200 00:18:54.261 11:31:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:54.261 11:31:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:54.261 11:31:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84200 00:18:54.261 11:31:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:54.261 11:31:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:54.261 killing process with pid 84200 00:18:54.261 11:31:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84200' 00:18:54.261 11:31:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 84200 00:18:54.261 11:31:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 84200 00:18:54.518 11:31:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:54.518 11:31:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:54.518 11:31:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:54.518 11:31:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # iptr 00:18:54.518 11:31:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-save 00:18:54.518 11:31:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:54.518 11:31:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-restore 00:18:54.519 11:31:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:54.519 11:31:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:18:54.519 11:31:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:18:54.519 11:31:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:18:54.519 11:31:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:18:54.519 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:18:54.519 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:18:54.519 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:18:54.519 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:18:54.519 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:18:54.519 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:18:54.519 11:31:40 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:18:54.519 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:18:54.777 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:54.777 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:54.777 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@246 -- # remove_spdk_ns 00:18:54.777 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:54.777 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:54.777 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:54.777 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@300 -- # return 0 00:18:54.777 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.7HhWLWcIWE /tmp/tmp.Lq7nCxeCxx /tmp/tmp.KsWL0pcwzf 00:18:54.777 00:18:54.777 real 1m24.511s 00:18:54.777 user 2m19.899s 00:18:54.777 sys 0m26.588s 00:18:54.777 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:54.777 ************************************ 00:18:54.777 END TEST nvmf_tls 00:18:54.777 ************************************ 00:18:54.777 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:54.777 11:31:40 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:18:54.777 11:31:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:54.777 11:31:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:54.777 11:31:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:54.777 ************************************ 00:18:54.777 START TEST nvmf_fips 00:18:54.777 ************************************ 00:18:54.777 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:18:54.777 * Looking for test storage... 
00:18:54.777 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/fips 00:18:54.777 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:54.777 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # lcov --version 00:18:54.777 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:55.037 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:55.037 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:55.037 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:55.037 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:55.037 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:18:55.038 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:18:55.038 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:18:55.038 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:18:55.038 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:18:55.038 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:18:55.038 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:18:55.038 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:55.038 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:18:55.038 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:18:55.038 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:55.038 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:55.038 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:18:55.038 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:18:55.038 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:55.038 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:18:55.038 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:18:55.038 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:18:55.038 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:18:55.038 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:55.038 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:18:55.038 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:18:55.038 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:55.038 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:55.038 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:18:55.038 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:55.038 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:55.038 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:55.038 --rc genhtml_branch_coverage=1 00:18:55.038 --rc genhtml_function_coverage=1 00:18:55.038 --rc genhtml_legend=1 00:18:55.038 --rc geninfo_all_blocks=1 00:18:55.038 --rc geninfo_unexecuted_blocks=1 00:18:55.038 00:18:55.038 ' 00:18:55.038 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:55.038 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:55.038 --rc genhtml_branch_coverage=1 00:18:55.038 --rc genhtml_function_coverage=1 00:18:55.038 --rc genhtml_legend=1 00:18:55.038 --rc geninfo_all_blocks=1 00:18:55.038 --rc geninfo_unexecuted_blocks=1 00:18:55.038 00:18:55.038 ' 00:18:55.038 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:55.038 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:55.038 --rc genhtml_branch_coverage=1 00:18:55.038 --rc genhtml_function_coverage=1 00:18:55.038 --rc genhtml_legend=1 00:18:55.038 --rc geninfo_all_blocks=1 00:18:55.038 --rc geninfo_unexecuted_blocks=1 00:18:55.038 00:18:55.038 ' 00:18:55.038 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:55.038 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:55.038 --rc genhtml_branch_coverage=1 00:18:55.038 --rc genhtml_function_coverage=1 00:18:55.038 --rc genhtml_legend=1 00:18:55.038 --rc geninfo_all_blocks=1 00:18:55.038 --rc geninfo_unexecuted_blocks=1 00:18:55.038 00:18:55.038 ' 00:18:55.038 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:55.038 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:18:55.038 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
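Both the lcov check at the start of nvmf_fips and the OpenSSL 3.0.0 check further down run through the same scripts/common.sh comparison: split each version string on '.' and '-', then compare field by field. A simplified standalone sketch of that logic (the real cmp_versions supports <, >= and other operators, and this purely numeric version does not handle letter suffixes):

  version_ge() {    # returns 0 when $1 >= $2, comparing dot/dash separated numeric fields
      local IFS=.- i
      local -a v1 v2
      read -ra v1 <<< "$1"
      read -ra v2 <<< "$2"
      for (( i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++ )); do
          (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 0
          (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 1
      done
      return 0                        # all fields equal
  }
  version_ge "$(openssl version | awk '{print $2}')" 3.0.0 || exit 1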
00:18:55.038 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:55.038 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:55.038 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:55.038 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:55.038 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:55.038 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:55.038 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:55.038 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:55.038 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:55.038 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a 00:18:55.038 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=806d442c-08ba-4719-b76d-33169283459a 00:18:55.038 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:55.038 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:55.038 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:55.038 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:55.038 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:55.038 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:18:55.038 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:55.038 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:55.038 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:55.038 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:55.038 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:55.038 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:55.038 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:18:55.038 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:55.038 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:18:55.038 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:55.038 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:55.038 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:55.038 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:55.038 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:55.038 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:55.038 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:55.038 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:55.038 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:55.038 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:55.038 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:55.038 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:18:55.038 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local 
target=3.0.0 00:18:55.038 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:18:55.038 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:18:55.038 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:18:55.038 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:18:55.038 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:55.038 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:55.038 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:18:55.038 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:18:55.038 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:18:55.038 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:18:55.039 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:18:55.039 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:18:55.039 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:18:55.039 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:55.039 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:18:55.039 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:18:55.039 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:55.039 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:55.039 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:18:55.039 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:18:55.039 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:18:55.039 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:18:55.039 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:18:55.039 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:18:55.039 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:18:55.039 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:18:55.039 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:18:55.039 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:18:55.039 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:55.039 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:55.039 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:18:55.039 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:55.039 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:18:55.039 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:18:55.039 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:55.039 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:18:55.039 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:18:55.039 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:18:55.039 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:18:55.039 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:18:55.039 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:18:55.039 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:18:55.039 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:55.039 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:18:55.039 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:18:55.039 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:18:55.039 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:18:55.039 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:18:55.039 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:18:55.039 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:18:55.039 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:18:55.039 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:18:55.039 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:18:55.039 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:18:55.039 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:18:55.039 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:18:55.039 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:18:55.039 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:18:55.039 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:18:55.039 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:18:55.039 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:18:55.039 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:18:55.039 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:18:55.039 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:18:55.039 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # local es=0 00:18:55.039 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@654 -- # valid_exec_arg openssl md5 /dev/fd/62 00:18:55.039 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # local arg=openssl 00:18:55.039 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:18:55.039 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:55.039 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -t openssl 00:18:55.039 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:55.039 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # type -P openssl 00:18:55.039 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:55.039 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # arg=/usr/bin/openssl 00:18:55.039 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # [[ -x /usr/bin/openssl ]] 00:18:55.039 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # openssl md5 /dev/fd/62 00:18:55.039 Error setting digest 00:18:55.039 40224EC7467F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:18:55.039 40224EC7467F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:18:55.039 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # es=1 00:18:55.039 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:55.039 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:55.039 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:55.039 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:18:55.039 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:55.039 
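The FIPS sanity check that closes this part of the trace boils down to: point OPENSSL_CONF at the generated FIPS config, confirm that the two expected providers (base and fips) are loaded, and prove enforcement by watching a non-approved digest such as MD5 fail, exactly as the "Error setting digest" output above shows. A hedged sketch of that check, assuming the spdk_fips.conf produced by build_openssl_config is already in the working directory:

  export OPENSSL_CONF=spdk_fips.conf
  # expect exactly two providers, one of which is the fips provider
  mapfile -t providers < <(openssl list -providers | grep name)
  (( ${#providers[@]} == 2 )) || exit 1
  printf '%s\n' "${providers[@]}" | grep -qi fips || exit 1
  # MD5 is not FIPS-approved, so this must fail while the fips provider is active
  if echo test | openssl md5; then
      echo "FIPS mode is not actually enforced" >&2
      exit 1
  fi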
11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:55.039 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:55.039 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:55.039 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:55.039 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:55.039 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:55.039 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:55.039 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:18:55.039 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:18:55.039 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:18:55.039 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:18:55.039 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:18:55.039 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@460 -- # nvmf_veth_init 00:18:55.039 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:55.039 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:18:55.298 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:18:55.298 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:18:55.298 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:55.298 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:18:55.298 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:55.298 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:18:55.298 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:55.298 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:18:55.298 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:55.298 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:55.298 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:55.298 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:55.298 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:55.298 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:55.298 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:18:55.298 Cannot find device "nvmf_init_br" 00:18:55.298 11:31:40 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@162 -- # true 00:18:55.298 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:18:55.298 Cannot find device "nvmf_init_br2" 00:18:55.298 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@163 -- # true 00:18:55.298 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:18:55.298 Cannot find device "nvmf_tgt_br" 00:18:55.298 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@164 -- # true 00:18:55.298 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:18:55.298 Cannot find device "nvmf_tgt_br2" 00:18:55.298 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@165 -- # true 00:18:55.298 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:18:55.298 Cannot find device "nvmf_init_br" 00:18:55.298 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@166 -- # true 00:18:55.298 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:18:55.298 Cannot find device "nvmf_init_br2" 00:18:55.298 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@167 -- # true 00:18:55.298 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:18:55.298 Cannot find device "nvmf_tgt_br" 00:18:55.298 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@168 -- # true 00:18:55.298 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:18:55.298 Cannot find device "nvmf_tgt_br2" 00:18:55.298 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@169 -- # true 00:18:55.298 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:18:55.298 Cannot find device "nvmf_br" 00:18:55.298 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@170 -- # true 00:18:55.298 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:18:55.298 Cannot find device "nvmf_init_if" 00:18:55.298 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@171 -- # true 00:18:55.298 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:18:55.298 Cannot find device "nvmf_init_if2" 00:18:55.298 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@172 -- # true 00:18:55.298 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:55.298 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:55.298 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@173 -- # true 00:18:55.298 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:55.298 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:55.298 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@174 -- # true 00:18:55.298 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:18:55.298 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:55.298 11:31:40 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:18:55.298 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:55.298 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:55.298 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:55.298 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:55.298 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:55.298 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:18:55.298 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:18:55.298 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:18:55.558 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:18:55.558 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:18:55.558 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:18:55.558 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:18:55.558 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:18:55.558 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:18:55.558 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:55.558 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:55.558 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:55.558 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:18:55.558 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:18:55.558 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:18:55.558 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:18:55.558 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:55.558 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:55.558 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:55.558 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:18:55.558 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:18:55.558 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:18:55.558 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:55.558 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:18:55.558 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:18:55.558 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:55.558 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.098 ms 00:18:55.558 00:18:55.558 --- 10.0.0.3 ping statistics --- 00:18:55.558 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:55.558 rtt min/avg/max/mdev = 0.098/0.098/0.098/0.000 ms 00:18:55.558 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:18:55.558 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:18:55.558 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.048 ms 00:18:55.558 00:18:55.558 --- 10.0.0.4 ping statistics --- 00:18:55.558 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:55.558 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:18:55.558 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:55.558 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:55.558 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.044 ms 00:18:55.558 00:18:55.558 --- 10.0.0.1 ping statistics --- 00:18:55.558 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:55.558 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:18:55.558 11:31:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:18:55.558 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:18:55.558 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.054 ms 00:18:55.558 00:18:55.558 --- 10.0.0.2 ping statistics --- 00:18:55.558 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:55.558 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:18:55.558 11:31:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:55.558 11:31:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@461 -- # return 0 00:18:55.558 11:31:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:55.558 11:31:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:55.558 11:31:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:55.558 11:31:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:55.558 11:31:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:55.558 11:31:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:55.558 11:31:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:55.558 11:31:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:18:55.558 11:31:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:55.559 11:31:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:55.559 11:31:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:18:55.559 11:31:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # nvmfpid=84585 00:18:55.559 11:31:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # waitforlisten 84585 00:18:55.559 11:31:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 84585 ']' 00:18:55.559 11:31:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:55.559 11:31:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:55.559 11:31:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:55.559 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:55.559 11:31:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:55.559 11:31:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:55.559 11:31:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:18:55.559 [2024-11-20 11:31:41.119905] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 
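Condensed, the nvmf_veth_init trace above builds a small bridged topology: the initiator ends nvmf_init_if/nvmf_init_if2 keep 10.0.0.1/24 and 10.0.0.2/24 in the root namespace, the target ends nvmf_tgt_if/nvmf_tgt_if2 get 10.0.0.3/24 and 10.0.0.4/24 inside the nvmf_tgt_ns_spdk namespace, every *_br peer is enslaved to the nvmf_br bridge, tcp/4420 is opened in iptables, and the four pings prove reachability before nvmf_tgt is launched inside the namespace. A trimmed sketch of one initiator/target pair (root privileges and iproute2 assumed; the real helper also wires the second pair and the firewall rules):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator end + its bridge-side peer
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target end + its bridge-side peer
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk              # target end moves into the namespace
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link add nvmf_br type bridge
  for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$dev" up; done
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  ping -c 1 10.0.0.3                                          # root namespace reaching the target end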
00:18:55.559 [2024-11-20 11:31:41.120019] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:55.817 [2024-11-20 11:31:41.269034] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:55.817 [2024-11-20 11:31:41.300413] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:55.817 [2024-11-20 11:31:41.300468] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:55.817 [2024-11-20 11:31:41.300480] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:55.817 [2024-11-20 11:31:41.300488] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:55.817 [2024-11-20 11:31:41.300495] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:55.817 [2024-11-20 11:31:41.300805] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:55.817 11:31:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:55.817 11:31:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:18:55.817 11:31:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:55.817 11:31:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:55.817 11:31:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:18:56.076 11:31:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:56.076 11:31:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:18:56.076 11:31:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:18:56.076 11:31:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:18:56.076 11:31:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.0dd 00:18:56.076 11:31:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:18:56.076 11:31:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.0dd 00:18:56.076 11:31:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.0dd 00:18:56.076 11:31:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.0dd 00:18:56.076 11:31:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:56.334 [2024-11-20 11:31:41.764276] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:56.334 [2024-11-20 11:31:41.780227] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:56.334 [2024-11-20 11:31:41.780430] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:18:56.334 malloc0 00:18:56.334 11:31:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:56.334 11:31:41 
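The TLS credential used above is a pre-shared key in NVMe/TCP interchange format (the NVMeTLSkey-1:01:...: string), written into a mktemp file and locked down to mode 0600 before rpc.py configures the target; the trace then shows the TCP transport coming up, the experimental TLS listener on 10.0.0.3 port 4420, and the malloc0 namespace. The key-handling part, reproduced as fips.sh does it (the exact rpc.py subcommands behind setup_nvmf_tgt_conf are not expanded in this trace, so they are omitted here):

  key='NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:'
  key_path=$(mktemp -t spdk-psk.XXX)        # e.g. /tmp/spdk-psk.0dd in the run above
  echo -n "$key" > "$key_path"              # -n mirrors the script: no trailing newline is added
  chmod 0600 "$key_path"                    # keep the PSK private to the test user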
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=84620 00:18:56.334 11:31:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:56.334 11:31:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 84620 /var/tmp/bdevperf.sock 00:18:56.334 11:31:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 84620 ']' 00:18:56.334 11:31:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:56.334 11:31:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:56.334 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:56.334 11:31:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:56.334 11:31:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:56.334 11:31:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:18:56.592 [2024-11-20 11:31:41.927902] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 00:18:56.592 [2024-11-20 11:31:41.928012] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84620 ] 00:18:56.592 [2024-11-20 11:31:42.072597] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:56.592 [2024-11-20 11:31:42.108312] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:57.527 11:31:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:57.527 11:31:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:18:57.527 11:31:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.0dd 00:18:57.785 11:31:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:58.042 [2024-11-20 11:31:43.547881] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:58.042 TLSTESTn1 00:18:58.300 11:31:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:58.301 Running I/O for 10 seconds... 
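On the initiator side the trace boils down to four commands: start bdevperf in wait mode, register the PSK file with the bdevperf RPC server as key0, attach an NVMe-oF controller over TCP with that key, then trigger the run. Pulled out of the trace above, with the paths used in this job:

  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock \
      -q 128 -o 4096 -w verify -t 10 &                        # -z: wait for RPC configuration
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
      keyring_file_add_key key0 /tmp/spdk-psk.0dd
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
      bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock \
      perform_tests                                           # drives the 10 s verify workload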
00:19:00.171 3586.00 IOPS, 14.01 MiB/s [2024-11-20T11:31:47.191Z] 3730.50 IOPS, 14.57 MiB/s [2024-11-20T11:31:47.758Z] 3732.67 IOPS, 14.58 MiB/s [2024-11-20T11:31:49.181Z] 3768.25 IOPS, 14.72 MiB/s [2024-11-20T11:31:50.116Z] 3802.80 IOPS, 14.85 MiB/s [2024-11-20T11:31:51.051Z] 3820.17 IOPS, 14.92 MiB/s [2024-11-20T11:31:51.987Z] 3828.43 IOPS, 14.95 MiB/s [2024-11-20T11:31:52.923Z] 3843.00 IOPS, 15.01 MiB/s [2024-11-20T11:31:53.857Z] 3852.00 IOPS, 15.05 MiB/s [2024-11-20T11:31:53.857Z] 3863.30 IOPS, 15.09 MiB/s 00:19:08.268 Latency(us) 00:19:08.268 [2024-11-20T11:31:53.857Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:08.268 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:08.268 Verification LBA range: start 0x0 length 0x2000 00:19:08.269 TLSTESTn1 : 10.02 3869.69 15.12 0.00 0.00 33018.92 5242.88 32172.22 00:19:08.269 [2024-11-20T11:31:53.858Z] =================================================================================================================== 00:19:08.269 [2024-11-20T11:31:53.858Z] Total : 3869.69 15.12 0.00 0.00 33018.92 5242.88 32172.22 00:19:08.269 { 00:19:08.269 "results": [ 00:19:08.269 { 00:19:08.269 "job": "TLSTESTn1", 00:19:08.269 "core_mask": "0x4", 00:19:08.269 "workload": "verify", 00:19:08.269 "status": "finished", 00:19:08.269 "verify_range": { 00:19:08.269 "start": 0, 00:19:08.269 "length": 8192 00:19:08.269 }, 00:19:08.269 "queue_depth": 128, 00:19:08.269 "io_size": 4096, 00:19:08.269 "runtime": 10.016308, 00:19:08.269 "iops": 3869.689310672156, 00:19:08.269 "mibps": 15.115973869813109, 00:19:08.269 "io_failed": 0, 00:19:08.269 "io_timeout": 0, 00:19:08.269 "avg_latency_us": 33018.91544947931, 00:19:08.269 "min_latency_us": 5242.88, 00:19:08.269 "max_latency_us": 32172.21818181818 00:19:08.269 } 00:19:08.269 ], 00:19:08.269 "core_count": 1 00:19:08.269 } 00:19:08.269 11:31:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:19:08.269 11:31:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:19:08.269 11:31:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # type=--id 00:19:08.269 11:31:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@813 -- # id=0 00:19:08.269 11:31:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:19:08.269 11:31:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:19:08.269 11:31:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:19:08.269 11:31:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:19:08.269 11:31:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@824 -- # for n in $shm_files 00:19:08.269 11:31:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:19:08.269 nvmf_trace.0 00:19:08.527 11:31:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@827 -- # return 0 00:19:08.527 11:31:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 84620 00:19:08.527 11:31:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 84620 ']' 00:19:08.527 11:31:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 84620 
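The summary numbers are internally consistent: with 4096-byte I/Os, 3869.69 IOPS is 3869.69 x 4096 / 2^20, about 15.12 MiB/s (the mibps field), and at a queue depth of 128 Little's law gives 128 / 3869.69 s, about 33.1 ms per I/O, close to the reported avg_latency_us of roughly 33019. A one-liner to re-derive both from the JSON values:

  awk 'BEGIN { iops = 3869.689310672156; print iops * 4096 / 1048576, 128 / iops * 1e6 }'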
00:19:08.527 11:31:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:19:08.527 11:31:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:08.527 11:31:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84620 00:19:08.527 11:31:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:08.527 killing process with pid 84620 00:19:08.527 11:31:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:08.527 11:31:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84620' 00:19:08.527 Received shutdown signal, test time was about 10.000000 seconds 00:19:08.527 00:19:08.527 Latency(us) 00:19:08.527 [2024-11-20T11:31:54.116Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:08.527 [2024-11-20T11:31:54.116Z] =================================================================================================================== 00:19:08.527 [2024-11-20T11:31:54.116Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:08.527 11:31:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 84620 00:19:08.527 11:31:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 84620 00:19:08.527 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:19:08.527 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:08.527 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:19:08.527 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:08.527 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:19:08.527 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:08.527 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:08.527 rmmod nvme_tcp 00:19:08.786 rmmod nvme_fabrics 00:19:08.786 rmmod nvme_keyring 00:19:08.786 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:08.786 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:19:08.786 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:19:08.786 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@517 -- # '[' -n 84585 ']' 00:19:08.786 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # killprocess 84585 00:19:08.786 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 84585 ']' 00:19:08.786 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 84585 00:19:08.786 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:19:08.786 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:08.786 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84585 00:19:08.786 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:08.786 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_1 
= sudo ']' 00:19:08.786 killing process with pid 84585 00:19:08.786 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84585' 00:19:08.786 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 84585 00:19:08.786 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 84585 00:19:08.786 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:08.786 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:08.786 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:08.786 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # iptr 00:19:08.786 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-save 00:19:08.786 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:08.786 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-restore 00:19:08.786 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:08.786 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:19:08.786 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:19:08.786 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:19:09.045 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:19:09.045 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:19:09.045 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:19:09.045 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:19:09.045 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:19:09.045 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:19:09.045 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:19:09.045 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:19:09.045 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:19:09.045 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:09.045 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:09.045 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@246 -- # remove_spdk_ns 00:19:09.045 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:09.045 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:09.045 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:09.045 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@300 -- # return 0 00:19:09.045 11:31:54 
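Cleanup is the mirror image of setup: the SPDK-tagged firewall rules are filtered out with an iptables-save | grep -v SPDK_NVMF | iptables-restore round trip (which is why every rule was added with the SPDK_NVMF comment), the veth and bridge devices are deleted, and the target namespace is torn down. A condensed sketch; the last line is an assumption about what _remove_spdk_ns ends up doing, since that helper's body is not expanded in this trace:

  iptables-save | grep -v SPDK_NVMF | iptables-restore        # drop only the rules the test tagged
  ip link delete nvmf_br type bridge
  ip link delete nvmf_init_if
  ip link delete nvmf_init_if2
  ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
  ip netns delete nvmf_tgt_ns_spdk                             # assumed equivalent of _remove_spdk_ns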
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.0dd 00:19:09.045 00:19:09.045 real 0m14.368s 00:19:09.045 user 0m20.441s 00:19:09.045 sys 0m5.619s 00:19:09.045 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:09.045 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:09.045 ************************************ 00:19:09.045 END TEST nvmf_fips 00:19:09.045 ************************************ 00:19:09.305 11:31:54 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /home/vagrant/spdk_repo/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:19:09.305 11:31:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:09.305 11:31:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:09.305 11:31:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:09.305 ************************************ 00:19:09.305 START TEST nvmf_control_msg_list 00:19:09.305 ************************************ 00:19:09.305 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:19:09.305 * Looking for test storage... 00:19:09.305 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:19:09.305 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:09.305 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # lcov --version 00:19:09.305 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:09.305 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:09.305 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:09.305 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:09.305 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:09.305 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:19:09.305 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:19:09.305 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:19:09.305 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:19:09.305 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:19:09.305 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:19:09.305 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:19:09.305 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:09.305 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:19:09.305 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:19:09.305 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:19:09.305 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:09.305 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:19:09.305 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:19:09.305 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:09.305 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:19:09.305 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:19:09.305 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:19:09.305 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:19:09.305 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:09.305 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:19:09.305 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # ver2[v]=2 00:19:09.305 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:09.305 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:09.305 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:19:09.305 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:09.305 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:09.305 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:09.305 --rc genhtml_branch_coverage=1 00:19:09.305 --rc genhtml_function_coverage=1 00:19:09.305 --rc genhtml_legend=1 00:19:09.305 --rc geninfo_all_blocks=1 00:19:09.305 --rc geninfo_unexecuted_blocks=1 00:19:09.305 00:19:09.305 ' 00:19:09.305 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:09.305 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:09.305 --rc genhtml_branch_coverage=1 00:19:09.305 --rc genhtml_function_coverage=1 00:19:09.305 --rc genhtml_legend=1 00:19:09.305 --rc geninfo_all_blocks=1 00:19:09.305 --rc geninfo_unexecuted_blocks=1 00:19:09.305 00:19:09.305 ' 00:19:09.305 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:09.305 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:09.305 --rc genhtml_branch_coverage=1 00:19:09.305 --rc genhtml_function_coverage=1 00:19:09.305 --rc genhtml_legend=1 00:19:09.306 --rc geninfo_all_blocks=1 00:19:09.306 --rc geninfo_unexecuted_blocks=1 00:19:09.306 00:19:09.306 ' 00:19:09.306 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:09.306 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:09.306 --rc genhtml_branch_coverage=1 00:19:09.306 --rc genhtml_function_coverage=1 00:19:09.306 --rc genhtml_legend=1 00:19:09.306 --rc geninfo_all_blocks=1 00:19:09.306 --rc 
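The scripts/common.sh trace above is just a field-wise version comparison: lt 1.15 2 splits both strings on '.', '-' and ':' and compares the pieces numerically to decide which LCOV_OPTS block to export. A compact re-statement of that logic (a sketch, not the exact upstream code):

  lt() {
      local IFS=.-: i
      local -a a b
      read -ra a <<< "$1"; read -ra b <<< "$2"
      for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
          (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
          (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
      done
      return 1                                   # equal versions are not "less than"
  }
  lt 1.15 2 && echo "lcov 1.15 is older than 2"  # true for the lcov reported above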
geninfo_unexecuted_blocks=1 00:19:09.306 00:19:09.306 ' 00:19:09.306 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:09.306 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:19:09.306 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:09.306 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:09.306 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:09.306 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:09.306 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:09.306 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:09.306 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:09.306 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:09.306 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:09.306 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:09.306 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a 00:19:09.306 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=806d442c-08ba-4719-b76d-33169283459a 00:19:09.306 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:09.306 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:09.306 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:09.306 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:09.306 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:09.306 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:19:09.306 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:09.306 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:09.306 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:09.306 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:09.306 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:09.306 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:09.306 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:19:09.306 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:09.306 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:19:09.306 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:09.306 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:09.306 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:09.306 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:09.306 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:09.306 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:09.306 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:09.306 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:09.306 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:09.306 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:09.306 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:19:09.306 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:09.306 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:09.306 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:09.306 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:09.306 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:09.306 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:09.306 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:09.306 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:09.306 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:19:09.306 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:19:09.306 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:19:09.306 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:19:09.306 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:19:09.306 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@460 -- # nvmf_veth_init 00:19:09.306 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:09.306 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:19:09.306 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:19:09.306 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:19:09.306 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:09.306 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:19:09.306 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:09.306 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:19:09.306 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:09.306 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
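The "[: : integer expression expected" line is bash complaining that nvmf/common.sh line 33 ran [ '' -eq 1 ] with an empty value where a number was expected (an unset build flag); the test treats it as false and carries on, as the rest of the trace shows. A minimal reproduction, with an illustrative variable name since the real one is not shown in the trace:

  flag=''                        # stands in for the unset flag
  if [ "$flag" -eq 1 ]; then     # prints "[: : integer expression expected" and evaluates false
      echo 'flag set'
  fi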
nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:19:09.306 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:09.306 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:09.306 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:09.306 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:09.306 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:09.306 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:09.306 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:19:09.306 Cannot find device "nvmf_init_br" 00:19:09.306 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@162 -- # true 00:19:09.306 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:19:09.306 Cannot find device "nvmf_init_br2" 00:19:09.306 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@163 -- # true 00:19:09.306 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:19:09.566 Cannot find device "nvmf_tgt_br" 00:19:09.566 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@164 -- # true 00:19:09.566 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:19:09.566 Cannot find device "nvmf_tgt_br2" 00:19:09.566 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@165 -- # true 00:19:09.566 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:19:09.566 Cannot find device "nvmf_init_br" 00:19:09.566 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@166 -- # true 00:19:09.566 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:19:09.566 Cannot find device "nvmf_init_br2" 00:19:09.566 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@167 -- # true 00:19:09.566 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:19:09.566 Cannot find device "nvmf_tgt_br" 00:19:09.566 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@168 -- # true 00:19:09.566 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:19:09.566 Cannot find device "nvmf_tgt_br2" 00:19:09.566 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@169 -- # true 00:19:09.566 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:19:09.566 Cannot find device "nvmf_br" 00:19:09.566 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@170 -- # true 00:19:09.566 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:19:09.566 Cannot find 
device "nvmf_init_if" 00:19:09.566 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@171 -- # true 00:19:09.566 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:19:09.566 Cannot find device "nvmf_init_if2" 00:19:09.566 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@172 -- # true 00:19:09.566 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:09.566 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:09.566 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@173 -- # true 00:19:09.566 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:09.566 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:09.566 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@174 -- # true 00:19:09.566 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:19:09.566 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:09.566 11:31:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:19:09.566 11:31:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:09.566 11:31:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:09.566 11:31:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:09.566 11:31:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:09.566 11:31:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:09.566 11:31:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:19:09.566 11:31:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:19:09.566 11:31:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:19:09.566 11:31:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:19:09.566 11:31:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:19:09.566 11:31:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:19:09.566 11:31:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:19:09.566 11:31:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:19:09.566 11:31:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:19:09.566 11:31:55 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:09.566 11:31:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:09.566 11:31:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:09.566 11:31:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:19:09.566 11:31:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:19:09.566 11:31:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:19:09.825 11:31:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:19:09.825 11:31:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:09.825 11:31:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:09.825 11:31:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:09.825 11:31:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:19:09.825 11:31:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:19:09.825 11:31:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:19:09.825 11:31:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:09.825 11:31:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:19:09.825 11:31:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:19:09.825 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:09.825 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.082 ms 00:19:09.825 00:19:09.825 --- 10.0.0.3 ping statistics --- 00:19:09.825 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:09.825 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:19:09.825 11:31:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:19:09.825 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:19:09.825 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.043 ms 00:19:09.825 00:19:09.825 --- 10.0.0.4 ping statistics --- 00:19:09.825 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:09.825 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:19:09.825 11:31:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:09.825 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:09.825 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:19:09.825 00:19:09.825 --- 10.0.0.1 ping statistics --- 00:19:09.825 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:09.825 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:19:09.825 11:31:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:19:09.825 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:09.826 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.046 ms 00:19:09.826 00:19:09.826 --- 10.0.0.2 ping statistics --- 00:19:09.826 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:09.826 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:19:09.826 11:31:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:09.826 11:31:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@461 -- # return 0 00:19:09.826 11:31:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:09.826 11:31:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:09.826 11:31:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:09.826 11:31:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:09.826 11:31:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:09.826 11:31:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:09.826 11:31:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:09.826 11:31:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:19:09.826 11:31:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:09.826 11:31:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:09.826 11:31:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:09.826 11:31:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # nvmfpid=85039 00:19:09.826 11:31:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # waitforlisten 85039 00:19:09.826 11:31:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:19:09.826 11:31:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@835 -- # '[' -z 85039 ']' 00:19:09.826 11:31:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:09.826 11:31:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:09.826 11:31:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:09.826 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
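(Editor's sketch: the nvmf/common.sh trace above assembles the harness's virtual test network one command at a time. Condensed into a standalone form, using only the interface names, addresses, and iptables rules that actually appear in the log, the topology built by nvmf_veth_init amounts to the sketch below; it is an illustrative summary of the logged commands, not the harness script itself.)

# Illustrative condensation of the nvmf_veth_init trace above (names and addresses as logged).
ip netns add nvmf_tgt_ns_spdk

# Two initiator-side and two target-side veth pairs.
ip link add nvmf_init_if  type veth peer name nvmf_init_br
ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2

# Target ends move into the namespace where nvmf_tgt will run.
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

# Addressing: initiators on 10.0.0.1/.2, target listeners on 10.0.0.3/.4.
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip addr add 10.0.0.2/24 dev nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2

# Bring everything up, then bridge the four peer ends together on the host side.
for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" up
done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" master nvmf_br
done

# Allow NVMe/TCP (port 4420) in from the initiator interfaces and forwarding across the bridge.
iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT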
00:19:09.826 11:31:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:09.826 11:31:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:09.826 [2024-11-20 11:31:55.347898] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 00:19:09.826 [2024-11-20 11:31:55.348005] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:10.084 [2024-11-20 11:31:55.499362] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:10.084 [2024-11-20 11:31:55.548127] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:10.084 [2024-11-20 11:31:55.548210] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:10.084 [2024-11-20 11:31:55.548233] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:10.084 [2024-11-20 11:31:55.548249] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:10.084 [2024-11-20 11:31:55.548263] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:10.084 [2024-11-20 11:31:55.548759] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:11.020 11:31:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:11.020 11:31:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@868 -- # return 0 00:19:11.020 11:31:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:11.020 11:31:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:11.020 11:31:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:11.020 11:31:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:11.020 11:31:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:19:11.020 11:31:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:19:11.020 11:31:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:19:11.020 11:31:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.020 11:31:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:11.020 [2024-11-20 11:31:56.444891] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:11.020 11:31:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.020 11:31:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:19:11.020 11:31:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.020 11:31:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:11.020 11:31:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.020 11:31:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:19:11.020 11:31:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.020 11:31:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:11.020 Malloc0 00:19:11.020 11:31:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.020 11:31:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:19:11.020 11:31:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.020 11:31:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:11.020 11:31:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.020 11:31:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:19:11.020 11:31:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.020 11:31:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:11.020 [2024-11-20 11:31:56.491995] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:19:11.020 11:31:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.020 11:31:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=85090 00:19:11.020 11:31:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:19:11.020 11:31:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=85091 00:19:11.020 11:31:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:19:11.020 11:31:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=85092 00:19:11.020 11:31:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:19:11.020 11:31:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 85090 00:19:11.279 [2024-11-20 11:31:56.672277] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery 
subsystem. This behavior is deprecated and will be removed in a future release. 00:19:11.279 [2024-11-20 11:31:56.682574] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:19:11.279 [2024-11-20 11:31:56.683096] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:19:12.215 Initializing NVMe Controllers 00:19:12.215 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:19:12.215 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:19:12.215 Initialization complete. Launching workers. 00:19:12.215 ======================================================== 00:19:12.215 Latency(us) 00:19:12.215 Device Information : IOPS MiB/s Average min max 00:19:12.215 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 3232.99 12.63 308.96 133.34 849.40 00:19:12.215 ======================================================== 00:19:12.215 Total : 3232.99 12.63 308.96 133.34 849.40 00:19:12.215 00:19:12.215 Initializing NVMe Controllers 00:19:12.215 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:19:12.215 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:19:12.215 Initialization complete. Launching workers. 00:19:12.215 ======================================================== 00:19:12.215 Latency(us) 00:19:12.215 Device Information : IOPS MiB/s Average min max 00:19:12.215 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 3213.00 12.55 310.89 212.90 849.12 00:19:12.215 ======================================================== 00:19:12.215 Total : 3213.00 12.55 310.89 212.90 849.12 00:19:12.215 00:19:12.215 Initializing NVMe Controllers 00:19:12.215 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:19:12.215 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:19:12.215 Initialization complete. Launching workers. 
00:19:12.215 ======================================================== 00:19:12.215 Latency(us) 00:19:12.215 Device Information : IOPS MiB/s Average min max 00:19:12.215 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 3213.00 12.55 310.82 213.78 851.40 00:19:12.215 ======================================================== 00:19:12.215 Total : 3213.00 12.55 310.82 213.78 851.40 00:19:12.215 00:19:12.215 11:31:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 85091 00:19:12.215 11:31:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 85092 00:19:12.215 11:31:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:19:12.215 11:31:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:19:12.215 11:31:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:12.215 11:31:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync 00:19:12.215 11:31:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:12.215 11:31:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:19:12.215 11:31:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:12.215 11:31:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:12.215 rmmod nvme_tcp 00:19:12.215 rmmod nvme_fabrics 00:19:12.215 rmmod nvme_keyring 00:19:12.473 11:31:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:12.473 11:31:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:19:12.473 11:31:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:19:12.473 11:31:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@517 -- # '[' -n 85039 ']' 00:19:12.473 11:31:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # killprocess 85039 00:19:12.473 11:31:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # '[' -z 85039 ']' 00:19:12.473 11:31:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # kill -0 85039 00:19:12.473 11:31:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # uname 00:19:12.473 11:31:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:12.473 11:31:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85039 00:19:12.473 11:31:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:12.473 11:31:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:12.473 killing process with pid 85039 00:19:12.473 11:31:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85039' 00:19:12.473 11:31:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@973 -- # kill 85039 00:19:12.473 11:31:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
common/autotest_common.sh@978 -- # wait 85039 00:19:12.473 11:31:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:12.473 11:31:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:12.473 11:31:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:12.474 11:31:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:19:12.474 11:31:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-save 00:19:12.474 11:31:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-restore 00:19:12.474 11:31:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:12.474 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:12.474 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:19:12.474 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:19:12.474 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:19:12.474 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:19:12.474 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:19:12.733 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:19:12.733 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:19:12.733 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:19:12.733 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:19:12.733 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:19:12.733 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:19:12.733 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:19:12.733 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:12.733 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:12.733 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@246 -- # remove_spdk_ns 00:19:12.733 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:12.733 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:12.733 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:12.733 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@300 -- # return 0 00:19:12.733 00:19:12.733 real 0m3.611s 00:19:12.733 user 0m5.636s 00:19:12.733 
sys 0m1.370s 00:19:12.733 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:12.733 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:12.733 ************************************ 00:19:12.733 END TEST nvmf_control_msg_list 00:19:12.733 ************************************ 00:19:12.733 11:31:58 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /home/vagrant/spdk_repo/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:19:12.733 11:31:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:12.733 11:31:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:12.733 11:31:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:12.733 ************************************ 00:19:12.733 START TEST nvmf_wait_for_buf 00:19:12.733 ************************************ 00:19:12.733 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:19:12.993 * Looking for test storage... 00:19:12.993 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:19:12.993 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:12.993 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:12.993 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # lcov --version 00:19:12.993 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:12.993 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:12.993 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:12.993 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:12.993 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:19:12.993 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:19:12.993 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:19:12.993 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:19:12.993 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:19:12.993 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:19:12.993 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:19:12.993 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:12.993 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:19:12.993 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@345 -- # : 1 00:19:12.993 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:12.993 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:12.993 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:19:12.993 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:19:12.993 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:12.993 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:19:12.993 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:19:12.993 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:19:12.993 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:19:12.993 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:12.993 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:19:12.993 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:19:12.993 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:12.993 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:12.993 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:19:12.993 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:12.993 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:12.993 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:12.993 --rc genhtml_branch_coverage=1 00:19:12.993 --rc genhtml_function_coverage=1 00:19:12.993 --rc genhtml_legend=1 00:19:12.993 --rc geninfo_all_blocks=1 00:19:12.993 --rc geninfo_unexecuted_blocks=1 00:19:12.993 00:19:12.993 ' 00:19:12.993 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:12.993 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:12.993 --rc genhtml_branch_coverage=1 00:19:12.993 --rc genhtml_function_coverage=1 00:19:12.993 --rc genhtml_legend=1 00:19:12.993 --rc geninfo_all_blocks=1 00:19:12.993 --rc geninfo_unexecuted_blocks=1 00:19:12.993 00:19:12.993 ' 00:19:12.993 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:12.993 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:12.993 --rc genhtml_branch_coverage=1 00:19:12.993 --rc genhtml_function_coverage=1 00:19:12.993 --rc genhtml_legend=1 00:19:12.993 --rc geninfo_all_blocks=1 00:19:12.993 --rc geninfo_unexecuted_blocks=1 00:19:12.993 00:19:12.993 ' 00:19:12.993 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:12.993 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:12.993 --rc genhtml_branch_coverage=1 00:19:12.993 --rc genhtml_function_coverage=1 00:19:12.993 --rc genhtml_legend=1 00:19:12.993 --rc geninfo_all_blocks=1 00:19:12.993 --rc geninfo_unexecuted_blocks=1 00:19:12.993 00:19:12.993 ' 00:19:12.993 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:12.993 11:31:58 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:19:12.993 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:12.993 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:12.994 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:12.994 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:12.994 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:12.994 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:12.994 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:12.994 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:12.994 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:12.994 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:12.994 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a 00:19:12.994 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=806d442c-08ba-4719-b76d-33169283459a 00:19:12.994 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:12.994 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:12.994 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:12.994 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:12.994 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:12.994 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:19:12.994 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:12.994 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:12.994 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:12.994 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:12.994 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:12.994 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:12.994 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:19:12.994 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:12.994 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:19:12.994 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:12.994 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:12.994 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:12.994 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:12.994 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:12.994 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:12.994 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:12.994 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:12.994 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:12.994 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:12.994 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:19:12.994 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 
00:19:12.994 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:12.994 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:12.994 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:12.994 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:12.994 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:12.994 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:12.994 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:12.994 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:19:12.994 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:19:12.994 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:19:12.994 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:19:12.994 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:19:12.994 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@460 -- # nvmf_veth_init 00:19:12.994 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:12.994 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:19:12.994 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:19:12.994 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:19:12.994 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:12.994 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:19:12.994 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:12.994 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:19:12.994 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:12.994 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:19:12.994 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:12.994 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:12.994 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:12.994 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:12.994 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:12.994 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:12.994 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:19:12.994 Cannot find device "nvmf_init_br" 00:19:12.994 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@162 -- # true 00:19:12.994 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:19:12.994 Cannot find device "nvmf_init_br2" 00:19:12.994 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@163 -- # true 00:19:12.994 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:19:12.994 Cannot find device "nvmf_tgt_br" 00:19:12.994 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@164 -- # true 00:19:12.994 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:19:12.994 Cannot find device "nvmf_tgt_br2" 00:19:12.994 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@165 -- # true 00:19:12.994 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:19:13.256 Cannot find device "nvmf_init_br" 00:19:13.256 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@166 -- # true 00:19:13.256 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:19:13.256 Cannot find device "nvmf_init_br2" 00:19:13.256 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@167 -- # true 00:19:13.256 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:19:13.256 Cannot find device "nvmf_tgt_br" 00:19:13.256 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@168 -- # true 00:19:13.256 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:19:13.256 Cannot find device "nvmf_tgt_br2" 00:19:13.256 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@169 -- # true 00:19:13.256 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:19:13.256 Cannot find device "nvmf_br" 00:19:13.256 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@170 -- # true 00:19:13.256 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:19:13.256 Cannot find device "nvmf_init_if" 00:19:13.256 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@171 -- # true 00:19:13.256 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:19:13.256 Cannot find device "nvmf_init_if2" 00:19:13.256 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@172 -- # true 00:19:13.256 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:13.256 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:13.256 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@173 -- # true 00:19:13.256 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:13.256 Cannot 
open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:13.256 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@174 -- # true 00:19:13.256 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:19:13.256 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:13.256 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:19:13.256 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:13.256 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:13.256 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:13.256 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:13.256 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:13.256 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:19:13.256 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:19:13.256 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:19:13.256 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:19:13.256 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:19:13.256 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:19:13.256 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:19:13.256 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:19:13.256 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:19:13.256 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:13.256 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:13.256 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:13.256 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:19:13.256 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:19:13.257 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:19:13.257 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:19:13.257 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:13.257 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:13.516 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:13.516 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:19:13.516 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:19:13.516 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:19:13.516 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:13.516 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:19:13.516 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:19:13.516 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:13.516 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.077 ms 00:19:13.516 00:19:13.516 --- 10.0.0.3 ping statistics --- 00:19:13.516 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:13.516 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:19:13.516 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:19:13.516 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:19:13.516 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.049 ms 00:19:13.516 00:19:13.516 --- 10.0.0.4 ping statistics --- 00:19:13.516 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:13.516 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:19:13.516 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:13.516 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:13.516 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:19:13.516 00:19:13.516 --- 10.0.0.1 ping statistics --- 00:19:13.516 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:13.516 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:19:13.516 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:19:13.516 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:19:13.516 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.079 ms 00:19:13.516 00:19:13.516 --- 10.0.0.2 ping statistics --- 00:19:13.516 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:13.516 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:19:13.516 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:13.516 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@461 -- # return 0 00:19:13.516 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:13.516 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:13.516 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:13.516 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:13.516 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:13.516 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:13.516 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:13.516 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:19:13.516 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:13.516 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:13.516 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:13.516 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # nvmfpid=85332 00:19:13.516 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:19:13.516 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@510 -- # waitforlisten 85332 00:19:13.516 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # '[' -z 85332 ']' 00:19:13.516 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:13.516 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:13.516 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:13.516 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:13.517 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:13.517 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:13.517 [2024-11-20 11:31:58.978309] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 
00:19:13.517 [2024-11-20 11:31:58.978409] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:13.782 [2024-11-20 11:31:59.131353] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:13.782 [2024-11-20 11:31:59.168218] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:13.782 [2024-11-20 11:31:59.168276] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:13.782 [2024-11-20 11:31:59.168288] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:13.782 [2024-11-20 11:31:59.168298] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:13.782 [2024-11-20 11:31:59.168307] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:13.782 [2024-11-20 11:31:59.168684] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:13.782 11:31:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:13.782 11:31:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@868 -- # return 0 00:19:13.782 11:31:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:13.782 11:31:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:13.782 11:31:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:13.782 11:31:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:13.782 11:31:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:19:13.782 11:31:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:19:13.782 11:31:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:19:13.782 11:31:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.782 11:31:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:13.782 11:31:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.782 11:31:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:19:13.782 11:31:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.782 11:31:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:13.782 11:31:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.782 11:31:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:19:13.782 11:31:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.782 11:31:59 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:13.782 11:31:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.782 11:31:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:19:13.782 11:31:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.782 11:31:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:13.782 Malloc0 00:19:13.782 11:31:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.782 11:31:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:19:13.782 11:31:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.782 11:31:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:14.042 [2024-11-20 11:31:59.371435] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:14.042 11:31:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.042 11:31:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:19:14.042 11:31:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.042 11:31:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:14.042 11:31:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.042 11:31:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:19:14.042 11:31:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.042 11:31:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:14.042 11:31:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.042 11:31:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:19:14.042 11:31:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.042 11:31:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:14.042 [2024-11-20 11:31:59.399467] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:19:14.042 11:31:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.042 11:31:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:19:14.042 [2024-11-20 11:31:59.610824] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. 
This behavior is deprecated and will be removed in a future release. 00:19:15.417 Initializing NVMe Controllers 00:19:15.417 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:19:15.417 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:19:15.417 Initialization complete. Launching workers. 00:19:15.417 ======================================================== 00:19:15.417 Latency(us) 00:19:15.417 Device Information : IOPS MiB/s Average min max 00:19:15.418 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 124.00 15.50 33612.05 8022.09 64006.73 00:19:15.418 ======================================================== 00:19:15.418 Total : 124.00 15.50 33612.05 8022.09 64006.73 00:19:15.418 00:19:15.418 11:32:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:19:15.418 11:32:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:19:15.418 11:32:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.418 11:32:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:15.418 11:32:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.677 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=1958 00:19:15.677 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 1958 -eq 0 ]] 00:19:15.677 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:19:15.677 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:19:15.677 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:15.677 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:19:15.677 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:15.677 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:19:15.677 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:15.677 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:15.677 rmmod nvme_tcp 00:19:15.677 rmmod nvme_fabrics 00:19:15.677 rmmod nvme_keyring 00:19:15.677 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:15.677 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:19:15.677 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:19:15.677 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@517 -- # '[' -n 85332 ']' 00:19:15.677 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # killprocess 85332 00:19:15.677 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- # '[' -z 85332 ']' 00:19:15.677 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- # kill -0 85332 00:19:15.677 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # uname 
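Annotation: the wait_for_buf trace above deliberately starves the shared iobuf pool (154 small buffers of 8192 bytes, a TCP transport with only 24 shared buffers) and then drives 128 KiB random reads, so the pass criterion is simply that spdk_nvme_perf completes and the nvmf_TCP module reports a non-zero small_pool.retry count (1958 in this run). The sketch below reconstructs that flow from the commands visible in the trace; it assumes an nvmf_tgt started with --wait-for-rpc on the default RPC socket, which the trace does not show explicitly.

  # Sketch of the wait_for_buf flow, assembled from the traced rpc_cmd calls.
  # Assumption: nvmf_tgt is already running with --wait-for-rpc so the iobuf
  # pool sizes can still be changed before framework init.
  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  # Shrink the shared buffer pools before the framework starts.
  $RPC accel_set_options --small-cache-size 0 --large-cache-size 0
  $RPC iobuf_set_options --small-pool-count 154 --small_bufsize=8192
  $RPC framework_start_init

  # TCP transport with only 24 shared buffers, plus a malloc-backed namespace.
  $RPC bdev_malloc_create -b Malloc0 32 512
  $RPC nvmf_create_transport -t tcp -o -u 8192 -n 24 -b 24
  $RPC nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001
  $RPC nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0
  $RPC nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420

  # 128 KiB random reads at queue depth 4: far larger than the buffer pool,
  # so requests must wait for buffers instead of failing.
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420'

  # A non-zero retry counter proves buffers were exhausted and the target coped.
  retries=$($RPC iobuf_get_stats \
    | jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry')
  [[ $retries -gt 0 ]] && echo "small iobuf pool retried $retries times"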
00:19:15.677 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:15.677 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85332 00:19:15.677 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:15.677 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:15.677 killing process with pid 85332 00:19:15.677 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85332' 00:19:15.677 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@973 -- # kill 85332 00:19:15.677 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@978 -- # wait 85332 00:19:15.935 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:15.935 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:15.935 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:15.935 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:19:15.935 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-save 00:19:15.935 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-restore 00:19:15.935 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:15.935 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:15.935 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:19:15.935 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:19:15.935 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:19:15.935 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:19:15.935 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:19:15.935 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:19:15.935 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:19:15.935 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:19:15.935 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:19:15.935 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:19:15.935 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:19:15.935 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:19:15.935 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:15.935 11:32:01 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:15.935 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@246 -- # remove_spdk_ns 00:19:15.935 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:15.936 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:15.936 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:16.195 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@300 -- # return 0 00:19:16.195 00:19:16.195 real 0m3.241s 00:19:16.195 user 0m2.698s 00:19:16.195 sys 0m0.690s 00:19:16.195 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:16.195 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:16.195 ************************************ 00:19:16.195 END TEST nvmf_wait_for_buf 00:19:16.195 ************************************ 00:19:16.195 11:32:01 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 0 -eq 1 ']' 00:19:16.195 11:32:01 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ virt == phy ]] 00:19:16.195 11:32:01 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@67 -- # run_test nvmf_nsid /home/vagrant/spdk_repo/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:19:16.195 11:32:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:16.195 11:32:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:16.195 11:32:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:16.195 ************************************ 00:19:16.195 START TEST nvmf_nsid 00:19:16.195 ************************************ 00:19:16.195 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:19:16.195 * Looking for test storage... 
00:19:16.195 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:19:16.195 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:16.195 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:16.195 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # lcov --version 00:19:16.195 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:16.195 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:16.195 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:16.195 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:16.195 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # IFS=.-: 00:19:16.195 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # read -ra ver1 00:19:16.195 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # IFS=.-: 00:19:16.195 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # read -ra ver2 00:19:16.195 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@338 -- # local 'op=<' 00:19:16.195 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@340 -- # ver1_l=2 00:19:16.195 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@341 -- # ver2_l=1 00:19:16.195 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:16.195 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@344 -- # case "$op" in 00:19:16.196 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@345 -- # : 1 00:19:16.196 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:16.196 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:16.455 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # decimal 1 00:19:16.455 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=1 00:19:16.455 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:16.455 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 1 00:19:16.455 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # ver1[v]=1 00:19:16.455 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # decimal 2 00:19:16.455 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=2 00:19:16.455 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:16.455 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 2 00:19:16.455 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # ver2[v]=2 00:19:16.455 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:16.455 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:16.455 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # return 0 00:19:16.455 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:16.455 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:16.455 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:16.455 --rc genhtml_branch_coverage=1 00:19:16.455 --rc genhtml_function_coverage=1 00:19:16.455 --rc genhtml_legend=1 00:19:16.455 --rc geninfo_all_blocks=1 00:19:16.455 --rc geninfo_unexecuted_blocks=1 00:19:16.455 00:19:16.455 ' 00:19:16.455 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:16.455 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:16.455 --rc genhtml_branch_coverage=1 00:19:16.455 --rc genhtml_function_coverage=1 00:19:16.455 --rc genhtml_legend=1 00:19:16.455 --rc geninfo_all_blocks=1 00:19:16.455 --rc geninfo_unexecuted_blocks=1 00:19:16.455 00:19:16.455 ' 00:19:16.456 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:16.456 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:16.456 --rc genhtml_branch_coverage=1 00:19:16.456 --rc genhtml_function_coverage=1 00:19:16.456 --rc genhtml_legend=1 00:19:16.456 --rc geninfo_all_blocks=1 00:19:16.456 --rc geninfo_unexecuted_blocks=1 00:19:16.456 00:19:16.456 ' 00:19:16.456 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:16.456 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:16.456 --rc genhtml_branch_coverage=1 00:19:16.456 --rc genhtml_function_coverage=1 00:19:16.456 --rc genhtml_legend=1 00:19:16.456 --rc geninfo_all_blocks=1 00:19:16.456 --rc geninfo_unexecuted_blocks=1 00:19:16.456 00:19:16.456 ' 00:19:16.456 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:16.456 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # uname -s 00:19:16.456 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
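Annotation: the long scripts/common.sh trace above is just the lcov version gate: lt 1.15 2 splits both versions on ".-:" and compares them field by field, so 1 < 2 decides immediately and the newer --rc lcov options are selected. The helper below is an illustration of that comparison only, not the actual cmp_versions implementation, and assumes purely numeric version fields.

  # Illustration of the field-wise version compare walked through in the trace
  # (not the real scripts/common.sh helper; numeric fields assumed).
  version_lt() {
    local IFS=.-: i
    local -a a b
    read -ra a <<< "$1"
    read -ra b <<< "$2"
    for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
      (( ${a[i]:-0} < ${b[i]:-0} )) && return 0   # first lower field decides
      (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
    done
    return 1                                      # equal: not "less than"
  }

  version_lt 1.15 2 && echo "lcov 1.15 is older than 2"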
00:19:16.456 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:16.456 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:16.456 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:16.456 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:16.456 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:16.456 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:16.456 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:16.456 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:16.456 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:16.456 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a 00:19:16.456 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@18 -- # NVME_HOSTID=806d442c-08ba-4719-b76d-33169283459a 00:19:16.456 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:16.456 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:16.456 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:16.456 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:16.456 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:16.456 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@15 -- # shopt -s extglob 00:19:16.456 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:16.456 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:16.456 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:16.456 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:16.456 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:16.456 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:16.456 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@5 -- # export PATH 00:19:16.456 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:16.456 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@51 -- # : 0 00:19:16.456 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:16.456 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:16.456 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:16.456 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:16.456 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:16.456 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:16.456 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:16.456 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:16.456 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:16.456 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:16.456 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@11 -- # subnqn1=nqn.2024-10.io.spdk:cnode0 00:19:16.456 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@12 -- # subnqn2=nqn.2024-10.io.spdk:cnode1 00:19:16.456 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@13 -- # 
subnqn3=nqn.2024-10.io.spdk:cnode2 00:19:16.456 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@14 -- # tgt2sock=/var/tmp/tgt2.sock 00:19:16.456 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@15 -- # tgt2pid= 00:19:16.456 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@46 -- # nvmftestinit 00:19:16.456 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:16.456 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:16.456 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:16.456 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:16.456 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:16.456 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:16.456 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:16.456 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:16.456 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:19:16.456 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:19:16.456 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:19:16.456 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:19:16.456 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:19:16.456 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@460 -- # nvmf_veth_init 00:19:16.456 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:16.456 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:19:16.456 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:19:16.456 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:19:16.456 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:16.456 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:19:16.456 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:16.456 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:19:16.456 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:16.456 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:19:16.456 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:16.456 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:16.456 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:16.456 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@158 -- # 
NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:16.456 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:16.456 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:16.456 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:19:16.456 Cannot find device "nvmf_init_br" 00:19:16.456 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@162 -- # true 00:19:16.456 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:19:16.456 Cannot find device "nvmf_init_br2" 00:19:16.456 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@163 -- # true 00:19:16.456 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:19:16.456 Cannot find device "nvmf_tgt_br" 00:19:16.456 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@164 -- # true 00:19:16.456 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:19:16.456 Cannot find device "nvmf_tgt_br2" 00:19:16.456 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@165 -- # true 00:19:16.457 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:19:16.457 Cannot find device "nvmf_init_br" 00:19:16.457 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@166 -- # true 00:19:16.457 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:19:16.457 Cannot find device "nvmf_init_br2" 00:19:16.457 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@167 -- # true 00:19:16.457 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:19:16.457 Cannot find device "nvmf_tgt_br" 00:19:16.457 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@168 -- # true 00:19:16.457 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:19:16.457 Cannot find device "nvmf_tgt_br2" 00:19:16.457 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@169 -- # true 00:19:16.457 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:19:16.457 Cannot find device "nvmf_br" 00:19:16.457 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@170 -- # true 00:19:16.457 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:19:16.457 Cannot find device "nvmf_init_if" 00:19:16.457 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@171 -- # true 00:19:16.457 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:19:16.457 Cannot find device "nvmf_init_if2" 00:19:16.457 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@172 -- # true 00:19:16.457 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:16.457 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:16.457 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@173 -- # true 00:19:16.457 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 
00:19:16.457 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:16.457 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@174 -- # true 00:19:16.457 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:19:16.457 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:16.457 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:19:16.457 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:16.457 11:32:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:16.457 11:32:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:16.716 11:32:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:16.716 11:32:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:16.716 11:32:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:19:16.716 11:32:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:19:16.716 11:32:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:19:16.716 11:32:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:19:16.716 11:32:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:19:16.716 11:32:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:19:16.716 11:32:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:19:16.716 11:32:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:19:16.716 11:32:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:19:16.716 11:32:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:16.716 11:32:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:16.716 11:32:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:16.716 11:32:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:19:16.716 11:32:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:19:16.716 11:32:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:19:16.716 11:32:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:19:16.716 11:32:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:16.716 11:32:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 
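Annotation: because NET_TYPE=virt, nvmf_veth_init builds the whole fabric locally: the namespace nvmf_tgt_ns_spdk holds the target-side interfaces (10.0.0.3 and 10.0.0.4), the host keeps the initiator side (10.0.0.1 and 10.0.0.2), and the peer ends are joined by the nvmf_br bridge. The "Cannot find device" / "Cannot open network namespace" messages above are only the idempotent teardown of a topology that did not exist yet. The sketch below condenses the traced commands down to a single initiator/target pair (the test creates a second pair the same way).

  # One veth pair per side, target end moved into the namespace, host-side
  # peers bridged together - exactly the commands from nvmf_veth_init above.
  ip netns add nvmf_tgt_ns_spdk

  ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator pair
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br     # target pair
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk               # target end enters the namespace

  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if

  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up

  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br                      # bridge the host-side peer ends
  ip link set nvmf_tgt_br master nvmf_br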
00:19:16.716 11:32:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:16.716 11:32:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:19:16.716 11:32:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:19:16.716 11:32:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:19:16.716 11:32:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:16.716 11:32:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:19:16.716 11:32:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:19:16.716 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:16.716 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.069 ms 00:19:16.716 00:19:16.716 --- 10.0.0.3 ping statistics --- 00:19:16.716 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:16.716 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:19:16.716 11:32:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:19:16.716 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:19:16.716 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.061 ms 00:19:16.716 00:19:16.716 --- 10.0.0.4 ping statistics --- 00:19:16.716 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:16.716 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:19:16.716 11:32:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:16.716 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:16.716 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:19:16.716 00:19:16.717 --- 10.0.0.1 ping statistics --- 00:19:16.717 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:16.717 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:19:16.717 11:32:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:19:16.717 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:19:16.717 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.085 ms 00:19:16.717 00:19:16.717 --- 10.0.0.2 ping statistics --- 00:19:16.717 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:16.717 rtt min/avg/max/mdev = 0.085/0.085/0.085/0.000 ms 00:19:16.717 11:32:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:16.717 11:32:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@461 -- # return 0 00:19:16.717 11:32:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:16.717 11:32:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:16.717 11:32:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:16.717 11:32:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:16.717 11:32:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:16.717 11:32:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:16.717 11:32:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:16.717 11:32:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@47 -- # nvmfappstart -m 1 00:19:16.717 11:32:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:16.717 11:32:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:16.717 11:32:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:19:16.717 11:32:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@509 -- # nvmfpid=85607 00:19:16.717 11:32:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@510 -- # waitforlisten 85607 00:19:16.717 11:32:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 00:19:16.717 11:32:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 85607 ']' 00:19:16.717 11:32:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:16.717 11:32:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:16.717 11:32:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:16.717 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:16.717 11:32:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:16.717 11:32:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:19:16.975 [2024-11-20 11:32:02.324776] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 
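Annotation: with the links bridged, the test opens TCP/4420 from the initiator interfaces and tags every rule with an "SPDK_NVMF:" comment, which is what later lets the cleanup path run iptables-save | grep -v SPDK_NVMF | iptables-restore. It then proves reachability in both directions with single pings before starting nvmf_tgt inside the namespace. The sketch reuses the traced commands for one interface; running the target in the background is an assumption of how it would be done standalone.

  # Tagged firewall rules: the comment is the handle the teardown greps away.
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT'

  # Both directions must answer before the target is started.
  ping -c 1 10.0.0.3                                   # host -> namespace
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1    # namespace -> host

  # Target runs inside the namespace: single core, all trace groups enabled.
  ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 &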
00:19:16.975 [2024-11-20 11:32:02.324861] [ DPDK EAL parameters: nvmf -c 1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:16.975 [2024-11-20 11:32:02.477396] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:16.975 [2024-11-20 11:32:02.514657] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:16.975 [2024-11-20 11:32:02.514720] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:16.975 [2024-11-20 11:32:02.514734] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:16.975 [2024-11-20 11:32:02.514745] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:16.975 [2024-11-20 11:32:02.514754] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:16.975 [2024-11-20 11:32:02.515103] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:17.234 11:32:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:17.234 11:32:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:19:17.234 11:32:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:17.234 11:32:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:17.234 11:32:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:19:17.234 11:32:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:17.234 11:32:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@49 -- # trap cleanup SIGINT SIGTERM EXIT 00:19:17.234 11:32:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@52 -- # tgt2pid=85639 00:19:17.234 11:32:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock 00:19:17.234 11:32:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@54 -- # tgt1addr=10.0.0.3 00:19:17.234 11:32:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # get_main_ns_ip 00:19:17.234 11:32:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@769 -- # local ip 00:19:17.234 11:32:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:17.234 11:32:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:17.234 11:32:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:17.234 11:32:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:17.234 11:32:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:17.234 11:32:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:17.234 11:32:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:17.234 11:32:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:17.234 11:32:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@783 -- # echo 10.0.0.1 
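Annotation: the nsid test runs two SPDK targets side by side. The namespaced nvmf_tgt (pid 85607) is reached at 10.0.0.3, while a second spdk_tgt (pid 85639) runs on the host, pinned to core 1 and addressed through its own RPC socket /var/tmp/tgt2.sock; get_main_ns_ip resolves the host-side address (10.0.0.1) that this second target will listen on at port 4421. The trace batches that instance's configuration through a single rpc_cmd call, so the per-call breakdown in the sketch below is an assumed equivalent, not a transcript.

  # Second SPDK target: core mask 0x2 and a private RPC socket so its
  # configuration never collides with the first instance.
  SPDK=/home/vagrant/spdk_repo/spdk
  $SPDK/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock &
  tgt2pid=$!

  # Assumed per-call equivalent of the batched rpc_cmd configuration above.
  rpc2() { "$SPDK/scripts/rpc.py" -s /var/tmp/tgt2.sock "$@"; }
  rpc2 nvmf_create_transport -t tcp
  rpc2 nvmf_create_subsystem nqn.2024-10.io.spdk:cnode2 -a
  # (namespaces carrying the uuidgen-generated UUIDs are added here in the real test)
  rpc2 nvmf_subsystem_add_listener nqn.2024-10.io.spdk:cnode2 -t tcp -a 10.0.0.1 -s 4421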
00:19:17.234 11:32:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # tgt2addr=10.0.0.1 00:19:17.234 11:32:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # uuidgen 00:19:17.234 11:32:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # ns1uuid=7553f6d7-1222-4264-886e-3b2aa162e25f 00:19:17.234 11:32:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # uuidgen 00:19:17.234 11:32:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # ns2uuid=211656be-2928-4dfa-a6ed-04c379770677 00:19:17.234 11:32:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # uuidgen 00:19:17.234 11:32:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # ns3uuid=27f6d28f-9eb2-46cb-9d12-12d2b5cfc583 00:19:17.234 11:32:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@63 -- # rpc_cmd 00:19:17.234 11:32:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.234 11:32:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:19:17.234 null0 00:19:17.234 null1 00:19:17.234 null2 00:19:17.234 [2024-11-20 11:32:02.692340] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:17.234 [2024-11-20 11:32:02.701046] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 00:19:17.234 [2024-11-20 11:32:02.701133] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85639 ] 00:19:17.234 [2024-11-20 11:32:02.716467] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:19:17.234 11:32:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.234 11:32:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@79 -- # waitforlisten 85639 /var/tmp/tgt2.sock 00:19:17.234 11:32:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 85639 ']' 00:19:17.234 11:32:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/tgt2.sock 00:19:17.234 11:32:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:17.234 Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock... 00:19:17.235 11:32:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock...' 
00:19:17.235 11:32:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:17.235 11:32:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:19:17.493 [2024-11-20 11:32:02.850585] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:17.493 [2024-11-20 11:32:02.883537] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:17.751 11:32:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:17.751 11:32:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:19:17.751 11:32:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock 00:19:18.321 [2024-11-20 11:32:03.599041] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:18.321 [2024-11-20 11:32:03.615180] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.1 port 4421 *** 00:19:18.321 nvme0n1 nvme0n2 00:19:18.321 nvme1n1 00:19:18.321 11:32:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # nvme_connect 00:19:18.321 11:32:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@23 -- # local ctrlr 00:19:18.321 11:32:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@25 -- # nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a --hostid=806d442c-08ba-4719-b76d-33169283459a 00:19:18.321 11:32:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@28 -- # for ctrlr in /sys/class/nvme/nvme* 00:19:18.321 11:32:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ -e /sys/class/nvme/nvme0/subsysnqn ]] 00:19:18.321 11:32:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ nqn.2024-10.io.spdk:cnode2 == \n\q\n\.\2\0\2\4\-\1\0\.\i\o\.\s\p\d\k\:\c\n\o\d\e\2 ]] 00:19:18.321 11:32:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@31 -- # echo nvme0 00:19:18.321 11:32:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@32 -- # return 0 00:19:18.321 11:32:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # ctrlr=nvme0 00:19:18.321 11:32:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@95 -- # waitforblk nvme0n1 00:19:18.321 11:32:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:19:18.321 11:32:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:19:18.321 11:32:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:19:18.321 11:32:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1241 -- # '[' 0 -lt 15 ']' 00:19:18.321 11:32:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1242 -- # i=1 00:19:18.321 11:32:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1243 -- # sleep 1 00:19:19.269 11:32:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:19:19.269 11:32:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:19:19.269 11:32:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:19:19.269 11:32:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:19:19.269 11:32:04 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:19:19.269 11:32:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # uuid2nguid 7553f6d7-1222-4264-886e-3b2aa162e25f 00:19:19.269 11:32:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:19:19.269 11:32:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # nvme_get_nguid nvme0 1 00:19:19.269 11:32:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=1 nguid 00:19:19.269 11:32:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:19:19.269 11:32:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n1 -o json 00:19:19.528 11:32:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=7553f6d712224264886e3b2aa162e25f 00:19:19.528 11:32:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 7553F6D712224264886E3B2AA162E25F 00:19:19.528 11:32:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # [[ 7553F6D712224264886E3B2AA162E25F == \7\5\5\3\F\6\D\7\1\2\2\2\4\2\6\4\8\8\6\E\3\B\2\A\A\1\6\2\E\2\5\F ]] 00:19:19.528 11:32:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@97 -- # waitforblk nvme0n2 00:19:19.528 11:32:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:19:19.528 11:32:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n2 00:19:19.528 11:32:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:19:19.528 11:32:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:19:19.528 11:32:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n2 00:19:19.528 11:32:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:19:19.528 11:32:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # uuid2nguid 211656be-2928-4dfa-a6ed-04c379770677 00:19:19.528 11:32:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:19:19.528 11:32:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # nvme_get_nguid nvme0 2 00:19:19.528 11:32:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=2 nguid 00:19:19.528 11:32:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n2 -o json 00:19:19.528 11:32:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:19:19.528 11:32:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=211656be29284dfaa6ed04c379770677 00:19:19.528 11:32:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 211656BE29284DFAA6ED04C379770677 00:19:19.528 11:32:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # [[ 211656BE29284DFAA6ED04C379770677 == \2\1\1\6\5\6\B\E\2\9\2\8\4\D\F\A\A\6\E\D\0\4\C\3\7\9\7\7\0\6\7\7 ]] 00:19:19.528 11:32:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@99 -- # waitforblk nvme0n3 00:19:19.528 11:32:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:19:19.528 11:32:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:19:19.528 11:32:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n3 00:19:19.528 11:32:04 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n3 00:19:19.528 11:32:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:19:19.528 11:32:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:19:19.528 11:32:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # uuid2nguid 27f6d28f-9eb2-46cb-9d12-12d2b5cfc583 00:19:19.528 11:32:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:19:19.528 11:32:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # nvme_get_nguid nvme0 3 00:19:19.528 11:32:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=3 nguid 00:19:19.528 11:32:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:19:19.528 11:32:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n3 -o json 00:19:19.528 11:32:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=27f6d28f9eb246cb9d1212d2b5cfc583 00:19:19.528 11:32:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 27F6D28F9EB246CB9D1212D2B5CFC583 00:19:19.528 11:32:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # [[ 27F6D28F9EB246CB9D1212D2B5CFC583 == \2\7\F\6\D\2\8\F\9\E\B\2\4\6\C\B\9\D\1\2\1\2\D\2\B\5\C\F\C\5\8\3 ]] 00:19:19.528 11:32:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@101 -- # nvme disconnect -d /dev/nvme0 00:19:19.788 11:32:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:19:19.788 11:32:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@104 -- # cleanup 00:19:19.788 11:32:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@18 -- # killprocess 85639 00:19:19.788 11:32:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 85639 ']' 00:19:19.788 11:32:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 85639 00:19:19.788 11:32:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:19:19.788 11:32:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:19.788 11:32:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85639 00:19:19.788 11:32:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:19.788 11:32:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:19.788 killing process with pid 85639 00:19:19.788 11:32:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85639' 00:19:19.788 11:32:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 85639 00:19:19.788 11:32:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 85639 00:19:20.047 11:32:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@19 -- # nvmftestfini 00:19:20.047 11:32:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:20.047 11:32:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@121 -- # sync 00:19:20.047 11:32:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:20.047 11:32:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@124 -- # set +e 
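Annotation: the [[ ... == ... ]] assertions above are the heart of the nsid test: each namespace was created with a uuidgen UUID, and after nvme connect the kernel-reported NGUID must equal that UUID with the dashes stripped (uuid2nguid is just tr -d -), e.g. 7553f6d7-1222-4264-886e-3b2aa162e25f must come back as 7553F6D712224264886E3B2AA162E25F. The sketch below does the same round trip for the first namespace using the traced commands; the device name nvme0n1 is what nvme connect produced in this run, and the case-insensitive compare is a simplification of the test's uppercase match.

  # Verify that the NGUID the kernel reports matches the UUID the namespace
  # was created with (dashes removed).
  ns_uuid=7553f6d7-1222-4264-886e-3b2aa162e25f         # UUID used at namespace creation
  expected=$(tr -d - <<< "$ns_uuid")

  nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2 \
    --hostnqn=nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a \
    --hostid=806d442c-08ba-4719-b76d-33169283459a

  nguid=$(nvme id-ns /dev/nvme0n1 -o json | jq -r .nguid)
  if [[ ${nguid,,} == "${expected,,}" ]]; then
    echo "NSID 1 NGUID matches: $nguid"
  fi

  nvme disconnect -d /dev/nvme0                        # tear the connection back down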
00:19:20.047 11:32:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:20.047 11:32:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:20.047 rmmod nvme_tcp 00:19:20.047 rmmod nvme_fabrics 00:19:20.047 rmmod nvme_keyring 00:19:20.047 11:32:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:20.047 11:32:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@128 -- # set -e 00:19:20.047 11:32:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@129 -- # return 0 00:19:20.047 11:32:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@517 -- # '[' -n 85607 ']' 00:19:20.047 11:32:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@518 -- # killprocess 85607 00:19:20.047 11:32:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 85607 ']' 00:19:20.047 11:32:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 85607 00:19:20.047 11:32:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:19:20.047 11:32:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:20.047 11:32:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85607 00:19:20.047 11:32:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:20.047 11:32:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:20.047 killing process with pid 85607 00:19:20.047 11:32:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85607' 00:19:20.047 11:32:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 85607 00:19:20.047 11:32:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 85607 00:19:20.306 11:32:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:20.306 11:32:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:20.306 11:32:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:20.306 11:32:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@297 -- # iptr 00:19:20.306 11:32:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-save 00:19:20.306 11:32:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:20.306 11:32:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-restore 00:19:20.306 11:32:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:20.306 11:32:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:19:20.306 11:32:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:19:20.306 11:32:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:19:20.306 11:32:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:19:20.306 11:32:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:19:20.306 11:32:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@237 -- # ip link 
set nvmf_init_br down 00:19:20.306 11:32:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:19:20.306 11:32:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:19:20.306 11:32:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:19:20.306 11:32:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:19:20.306 11:32:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:19:20.565 11:32:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:19:20.565 11:32:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:20.566 11:32:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:20.566 11:32:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@246 -- # remove_spdk_ns 00:19:20.566 11:32:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:20.566 11:32:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:20.566 11:32:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:20.566 11:32:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@300 -- # return 0 00:19:20.566 ************************************ 00:19:20.566 END TEST nvmf_nsid 00:19:20.566 ************************************ 00:19:20.566 00:19:20.566 real 0m4.383s 00:19:20.566 user 0m7.001s 00:19:20.566 sys 0m1.207s 00:19:20.566 11:32:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:20.566 11:32:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:19:20.566 11:32:06 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:19:20.566 00:19:20.566 real 7m35.253s 00:19:20.566 user 18m26.460s 00:19:20.566 sys 1m24.390s 00:19:20.566 11:32:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:20.566 11:32:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:20.566 ************************************ 00:19:20.566 END TEST nvmf_target_extra 00:19:20.566 ************************************ 00:19:20.566 11:32:06 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:19:20.566 11:32:06 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:20.566 11:32:06 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:20.566 11:32:06 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:20.566 ************************************ 00:19:20.566 START TEST nvmf_host 00:19:20.566 ************************************ 00:19:20.566 11:32:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:19:20.566 * Looking for test storage... 
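The nvmf_tcp_fini/nvmf_veth_fini teardown traced above reduces to restoring the firewall rules and removing the virtual test network; the real/user/sys timing block and the END TEST markers are emitted by the run_test wrapper. A condensed sketch of that teardown, assuming the three xtraced iptables commands are run as one pipeline (the trace prints them as separate steps) and that _remove_spdk_ns ultimately deletes the namespace:

    # restore firewall state minus the SPDK-added rules (nvmf/common.sh iptr)
    iptables-save | grep -v SPDK_NVMF | iptables-restore
    # detach the bridge-side veth ends from nvmf_br, then bring them down
    for br_if in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$br_if" nomaster
    done
    for br_if in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$br_if" down
    done
    # delete the bridge and both halves of the veth pairs, then drop the namespace
    ip link delete nvmf_br type bridge
    ip link delete nvmf_init_if
    ip link delete nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
    ip netns delete nvmf_tgt_ns_spdk   # assumed equivalent of _remove_spdk_ns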
00:19:20.825 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:19:20.825 11:32:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:20.825 11:32:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # lcov --version 00:19:20.825 11:32:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:20.825 11:32:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:20.825 11:32:06 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:20.825 11:32:06 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:20.825 11:32:06 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:20.825 11:32:06 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:19:20.825 11:32:06 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:19:20.825 11:32:06 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:19:20.825 11:32:06 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:19:20.825 11:32:06 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:19:20.825 11:32:06 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:19:20.825 11:32:06 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:19:20.825 11:32:06 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:20.825 11:32:06 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:19:20.825 11:32:06 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:19:20.825 11:32:06 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:20.825 11:32:06 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:20.826 11:32:06 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:19:20.826 11:32:06 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:19:20.826 11:32:06 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:20.826 11:32:06 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:19:20.826 11:32:06 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:19:20.826 11:32:06 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:19:20.826 11:32:06 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:19:20.826 11:32:06 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:20.826 11:32:06 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:19:20.826 11:32:06 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:19:20.826 11:32:06 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:20.826 11:32:06 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:20.826 11:32:06 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:19:20.826 11:32:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:20.826 11:32:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:20.826 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:20.826 --rc genhtml_branch_coverage=1 00:19:20.826 --rc genhtml_function_coverage=1 00:19:20.826 --rc genhtml_legend=1 00:19:20.826 --rc geninfo_all_blocks=1 00:19:20.826 --rc geninfo_unexecuted_blocks=1 00:19:20.826 00:19:20.826 ' 00:19:20.826 11:32:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:20.826 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:19:20.826 --rc genhtml_branch_coverage=1 00:19:20.826 --rc genhtml_function_coverage=1 00:19:20.826 --rc genhtml_legend=1 00:19:20.826 --rc geninfo_all_blocks=1 00:19:20.826 --rc geninfo_unexecuted_blocks=1 00:19:20.826 00:19:20.826 ' 00:19:20.826 11:32:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:20.826 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:20.826 --rc genhtml_branch_coverage=1 00:19:20.826 --rc genhtml_function_coverage=1 00:19:20.826 --rc genhtml_legend=1 00:19:20.826 --rc geninfo_all_blocks=1 00:19:20.826 --rc geninfo_unexecuted_blocks=1 00:19:20.826 00:19:20.826 ' 00:19:20.826 11:32:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:20.826 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:20.826 --rc genhtml_branch_coverage=1 00:19:20.826 --rc genhtml_function_coverage=1 00:19:20.826 --rc genhtml_legend=1 00:19:20.826 --rc geninfo_all_blocks=1 00:19:20.826 --rc geninfo_unexecuted_blocks=1 00:19:20.826 00:19:20.826 ' 00:19:20.826 11:32:06 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:20.826 11:32:06 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:19:20.826 11:32:06 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:20.826 11:32:06 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:20.826 11:32:06 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:20.826 11:32:06 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:20.826 11:32:06 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:20.826 11:32:06 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:20.826 11:32:06 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:20.826 11:32:06 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:20.826 11:32:06 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:20.826 11:32:06 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:20.826 11:32:06 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a 00:19:20.826 11:32:06 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=806d442c-08ba-4719-b76d-33169283459a 00:19:20.826 11:32:06 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:20.826 11:32:06 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:20.826 11:32:06 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:20.826 11:32:06 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:20.826 11:32:06 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:20.826 11:32:06 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:19:20.826 11:32:06 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:20.826 11:32:06 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:20.826 11:32:06 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:20.826 11:32:06 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:20.826 11:32:06 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:20.826 11:32:06 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:20.826 11:32:06 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:19:20.826 11:32:06 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:20.826 11:32:06 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:19:20.826 11:32:06 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:20.826 11:32:06 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:20.826 11:32:06 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:20.826 11:32:06 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:20.826 11:32:06 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:20.826 11:32:06 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:20.826 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:20.826 11:32:06 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:20.826 11:32:06 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:20.826 11:32:06 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:20.826 11:32:06 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:19:20.826 11:32:06 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:19:20.826 11:32:06 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:19:20.826 11:32:06 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /home/vagrant/spdk_repo/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 
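The per-suite preamble above (and repeated below for nvmf_multicontroller) probes the installed lcov version and, when it is older than 2.x, falls back to the pre-2.0 spelling of the --rc branch/function coverage options. A simplified re-implementation of the dotted-version comparison visible in the scripts/common.sh xtrace, written only from what the trace shows and not from the script itself:

    # return 0 (true) when $1 < $2, comparing dot/dash-separated numeric fields
    version_lt() {
        local -a ver1 ver2
        local v d1 d2 len
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$2"
        len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < len; v++ )); do
            d1=${ver1[v]:-0}; d2=${ver2[v]:-0}   # missing fields compare as 0
            (( d1 > d2 )) && return 1
            (( d1 < d2 )) && return 0
        done
        return 1   # equal versions are not "less than"
    }
    version_lt 1.15 2 && echo "lcov older than 2.x"   # mirrors the 'lt 1.15 2' call in the trace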
00:19:20.826 11:32:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:20.826 11:32:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:20.827 11:32:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:19:20.827 ************************************ 00:19:20.827 START TEST nvmf_multicontroller 00:19:20.827 ************************************ 00:19:20.827 11:32:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:19:20.827 * Looking for test storage... 00:19:20.827 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:19:20.827 11:32:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:20.827 11:32:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:20.827 11:32:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # lcov --version 00:19:21.087 11:32:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:21.087 11:32:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:21.087 11:32:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:21.087 11:32:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:21.087 11:32:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # IFS=.-: 00:19:21.087 11:32:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # read -ra ver1 00:19:21.087 11:32:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # IFS=.-: 00:19:21.087 11:32:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # read -ra ver2 00:19:21.087 11:32:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@338 -- # local 'op=<' 00:19:21.087 11:32:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@340 -- # ver1_l=2 00:19:21.087 11:32:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@341 -- # ver2_l=1 00:19:21.087 11:32:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:21.087 11:32:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@344 -- # case "$op" in 00:19:21.087 11:32:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@345 -- # : 1 00:19:21.087 11:32:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:21.087 11:32:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:21.087 11:32:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # decimal 1 00:19:21.087 11:32:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=1 00:19:21.087 11:32:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:21.087 11:32:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 1 00:19:21.087 11:32:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # ver1[v]=1 00:19:21.087 11:32:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # decimal 2 00:19:21.087 11:32:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=2 00:19:21.087 11:32:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:21.087 11:32:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 2 00:19:21.087 11:32:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # ver2[v]=2 00:19:21.087 11:32:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:21.087 11:32:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:21.087 11:32:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # return 0 00:19:21.087 11:32:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:21.087 11:32:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:21.087 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:21.087 --rc genhtml_branch_coverage=1 00:19:21.087 --rc genhtml_function_coverage=1 00:19:21.087 --rc genhtml_legend=1 00:19:21.087 --rc geninfo_all_blocks=1 00:19:21.087 --rc geninfo_unexecuted_blocks=1 00:19:21.087 00:19:21.087 ' 00:19:21.087 11:32:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:21.087 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:21.087 --rc genhtml_branch_coverage=1 00:19:21.087 --rc genhtml_function_coverage=1 00:19:21.087 --rc genhtml_legend=1 00:19:21.087 --rc geninfo_all_blocks=1 00:19:21.087 --rc geninfo_unexecuted_blocks=1 00:19:21.087 00:19:21.087 ' 00:19:21.087 11:32:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:21.087 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:21.087 --rc genhtml_branch_coverage=1 00:19:21.087 --rc genhtml_function_coverage=1 00:19:21.087 --rc genhtml_legend=1 00:19:21.087 --rc geninfo_all_blocks=1 00:19:21.087 --rc geninfo_unexecuted_blocks=1 00:19:21.087 00:19:21.087 ' 00:19:21.087 11:32:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:21.087 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:21.087 --rc genhtml_branch_coverage=1 00:19:21.087 --rc genhtml_function_coverage=1 00:19:21.087 --rc genhtml_legend=1 00:19:21.087 --rc geninfo_all_blocks=1 00:19:21.087 --rc geninfo_unexecuted_blocks=1 00:19:21.087 00:19:21.087 ' 00:19:21.087 11:32:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:21.087 11:32:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:19:21.087 11:32:06 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:21.087 11:32:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:21.087 11:32:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:21.087 11:32:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:21.087 11:32:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:21.087 11:32:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:21.087 11:32:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:21.087 11:32:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:21.087 11:32:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:21.087 11:32:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:21.087 11:32:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a 00:19:21.087 11:32:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=806d442c-08ba-4719-b76d-33169283459a 00:19:21.087 11:32:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:21.087 11:32:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:21.087 11:32:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:21.087 11:32:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:21.087 11:32:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:21.087 11:32:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@15 -- # shopt -s extglob 00:19:21.087 11:32:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:21.087 11:32:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:21.087 11:32:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:21.087 11:32:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:21.087 11:32:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:21.088 11:32:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:21.088 11:32:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:19:21.088 11:32:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:21.088 11:32:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # : 0 00:19:21.088 11:32:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:21.088 11:32:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:21.088 11:32:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:21.088 11:32:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:21.088 11:32:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:21.088 11:32:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:21.088 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:21.088 11:32:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:21.088 11:32:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:21.088 11:32:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:21.088 11:32:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:21.088 11:32:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:21.088 11:32:06 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:19:21.088 11:32:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:19:21.088 11:32:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:21.088 11:32:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:19:21.088 11:32:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:19:21.088 11:32:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:21.088 11:32:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:21.088 11:32:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:21.088 11:32:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:21.088 11:32:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:21.088 11:32:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:21.088 11:32:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:21.088 11:32:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:21.088 11:32:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:19:21.088 11:32:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:19:21.088 11:32:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:19:21.088 11:32:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:19:21.088 11:32:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:19:21.088 11:32:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@460 -- # nvmf_veth_init 00:19:21.088 11:32:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:21.088 11:32:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:19:21.088 11:32:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:19:21.088 11:32:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:19:21.088 11:32:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:21.088 11:32:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:19:21.088 11:32:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:21.088 11:32:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:19:21.088 11:32:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:21.088 11:32:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:19:21.088 11:32:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:21.088 11:32:06 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:21.088 11:32:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:21.088 11:32:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:21.088 11:32:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:21.088 11:32:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:21.088 11:32:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:19:21.088 Cannot find device "nvmf_init_br" 00:19:21.088 11:32:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@162 -- # true 00:19:21.088 11:32:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:19:21.088 Cannot find device "nvmf_init_br2" 00:19:21.088 11:32:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@163 -- # true 00:19:21.088 11:32:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:19:21.088 Cannot find device "nvmf_tgt_br" 00:19:21.088 11:32:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@164 -- # true 00:19:21.088 11:32:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:19:21.088 Cannot find device "nvmf_tgt_br2" 00:19:21.088 11:32:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@165 -- # true 00:19:21.088 11:32:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:19:21.088 Cannot find device "nvmf_init_br" 00:19:21.088 11:32:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@166 -- # true 00:19:21.088 11:32:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:19:21.088 Cannot find device "nvmf_init_br2" 00:19:21.088 11:32:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@167 -- # true 00:19:21.088 11:32:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:19:21.088 Cannot find device "nvmf_tgt_br" 00:19:21.088 11:32:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@168 -- # true 00:19:21.088 11:32:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:19:21.088 Cannot find device "nvmf_tgt_br2" 00:19:21.088 11:32:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@169 -- # true 00:19:21.088 11:32:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:19:21.088 Cannot find device "nvmf_br" 00:19:21.088 11:32:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@170 -- # true 00:19:21.088 11:32:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:19:21.088 Cannot find device "nvmf_init_if" 00:19:21.088 11:32:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@171 -- # true 00:19:21.088 11:32:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:19:21.088 Cannot find device "nvmf_init_if2" 00:19:21.088 11:32:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@172 -- # true 00:19:21.088 11:32:06 nvmf_tcp.nvmf_host.nvmf_multicontroller 
-- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:21.088 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:21.088 11:32:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@173 -- # true 00:19:21.088 11:32:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:21.088 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:21.088 11:32:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@174 -- # true 00:19:21.088 11:32:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:19:21.088 11:32:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:21.088 11:32:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:19:21.348 11:32:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:21.348 11:32:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:21.348 11:32:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:21.348 11:32:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:21.348 11:32:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:21.348 11:32:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:19:21.348 11:32:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:19:21.348 11:32:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:19:21.348 11:32:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:19:21.348 11:32:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:19:21.348 11:32:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:19:21.348 11:32:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:19:21.348 11:32:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:19:21.348 11:32:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:19:21.348 11:32:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:21.348 11:32:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:21.348 11:32:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:21.348 11:32:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:19:21.348 11:32:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:19:21.348 11:32:06 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:19:21.348 11:32:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:19:21.348 11:32:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:21.348 11:32:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:21.348 11:32:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:21.348 11:32:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:19:21.348 11:32:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:19:21.348 11:32:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:19:21.349 11:32:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:21.349 11:32:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:19:21.349 11:32:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:19:21.349 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:21.349 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.067 ms 00:19:21.349 00:19:21.349 --- 10.0.0.3 ping statistics --- 00:19:21.349 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:21.349 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:19:21.349 11:32:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:19:21.349 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:19:21.349 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.124 ms 00:19:21.349 00:19:21.349 --- 10.0.0.4 ping statistics --- 00:19:21.349 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:21.349 rtt min/avg/max/mdev = 0.124/0.124/0.124/0.000 ms 00:19:21.349 11:32:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:21.349 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:21.349 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.046 ms 00:19:21.349 00:19:21.349 --- 10.0.0.1 ping statistics --- 00:19:21.349 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:21.349 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:19:21.349 11:32:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:19:21.349 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:19:21.349 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.047 ms 00:19:21.349 00:19:21.349 --- 10.0.0.2 ping statistics --- 00:19:21.349 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:21.349 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:19:21.349 11:32:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:21.349 11:32:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@461 -- # return 0 00:19:21.349 11:32:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:21.349 11:32:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:21.349 11:32:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:21.349 11:32:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:21.349 11:32:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:21.349 11:32:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:21.349 11:32:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:21.349 11:32:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:19:21.349 11:32:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:21.349 11:32:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:21.349 11:32:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:21.349 11:32:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@509 -- # nvmfpid=86002 00:19:21.349 11:32:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@510 -- # waitforlisten 86002 00:19:21.349 11:32:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 86002 ']' 00:19:21.349 11:32:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:21.349 11:32:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:21.349 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:21.349 11:32:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:19:21.349 11:32:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:21.349 11:32:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:21.349 11:32:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:21.608 [2024-11-20 11:32:06.998171] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 
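At this point nvmf_veth_init has rebuilt the topology the earlier teardown removed: the initiator addresses 10.0.0.1/10.0.0.2 stay in the root namespace on nvmf_init_if/nvmf_init_if2, the target addresses 10.0.0.3/10.0.0.4 live inside nvmf_tgt_ns_spdk on nvmf_tgt_if/nvmf_tgt_if2, and the bridge-side ends of all four veth pairs are enslaved to nvmf_br, which is why the four pings above succeed in both directions before nvmf_tgt is launched inside the namespace. Condensed to one veth pair per side (the *2 interfaces are configured identically, per the trace):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator-side pair
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br    # target-side pair
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk               # move target end into the namespace
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    for l in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$l" up; done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    # the SPDK_NVMF comment tag is what the teardown greps out of iptables-save later
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:...'
    ping -c 1 10.0.0.3   # root namespace -> target namespace, across the bridge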
00:19:21.608 [2024-11-20 11:32:06.998277] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:21.608 [2024-11-20 11:32:07.153105] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:21.608 [2024-11-20 11:32:07.192989] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:21.608 [2024-11-20 11:32:07.193049] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:21.608 [2024-11-20 11:32:07.193062] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:21.608 [2024-11-20 11:32:07.193073] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:21.608 [2024-11-20 11:32:07.193082] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:21.608 [2024-11-20 11:32:07.193914] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:21.608 [2024-11-20 11:32:07.194009] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:19:21.608 [2024-11-20 11:32:07.194017] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:21.867 11:32:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:21.867 11:32:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:19:21.867 11:32:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:21.867 11:32:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:21.867 11:32:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:21.867 11:32:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:21.867 11:32:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:21.868 11:32:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.868 11:32:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:21.868 [2024-11-20 11:32:07.331106] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:21.868 11:32:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.868 11:32:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:21.868 11:32:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.868 11:32:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:21.868 Malloc0 00:19:21.868 11:32:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.868 11:32:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:21.868 11:32:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.868 11:32:07 nvmf_tcp.nvmf_host.nvmf_multicontroller 
-- common/autotest_common.sh@10 -- # set +x 00:19:21.868 11:32:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.868 11:32:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:21.868 11:32:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.868 11:32:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:21.868 11:32:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.868 11:32:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:19:21.868 11:32:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.868 11:32:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:21.868 [2024-11-20 11:32:07.387612] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:19:21.868 11:32:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.868 11:32:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:19:21.868 11:32:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.868 11:32:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:21.868 [2024-11-20 11:32:07.395546] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:19:21.868 11:32:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.868 11:32:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:19:21.868 11:32:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.868 11:32:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:21.868 Malloc1 00:19:21.868 11:32:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.868 11:32:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:19:21.868 11:32:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.868 11:32:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:21.868 11:32:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.868 11:32:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:19:21.868 11:32:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.868 11:32:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:21.868 11:32:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.868 11:32:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:19:21.868 11:32:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.868 11:32:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:21.868 11:32:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.868 11:32:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4421 00:19:21.868 11:32:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.868 11:32:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:22.128 11:32:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:22.128 11:32:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=86046 00:19:22.128 11:32:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:19:22.129 11:32:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:22.129 11:32:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 86046 /var/tmp/bdevperf.sock 00:19:22.129 11:32:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 86046 ']' 00:19:22.129 11:32:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:22.129 11:32:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:22.129 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:22.129 11:32:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
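Before bdevperf starts, the target inside the namespace is configured entirely over JSON-RPC: one TCP transport, two malloc bdevs, and two subsystems (cnode1/cnode2) that each expose a namespace on listeners 4420 and 4421 at 10.0.0.3. The rpc_cmd calls traced above are assumed to be thin wrappers around scripts/rpc.py against the target's socket (/var/tmp/spdk.sock, per the waitforlisten line earlier); a sketch of the same sequence for the first subsystem:

    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py"   # assumed backend of rpc_cmd
    $RPC nvmf_create_transport -t tcp -o -u 8192         # transport options exactly as passed in the trace
    $RPC bdev_malloc_create 64 512 -b Malloc0            # 64 MB RAM-backed bdev, 512-byte blocks
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421
    # cnode2/Malloc1 are created the same way, then bdevperf is started with its own RPC socket:
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f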
00:19:22.129 11:32:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:22.129 11:32:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:22.388 11:32:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:22.388 11:32:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:19:22.388 11:32:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:19:22.388 11:32:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:22.388 11:32:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:22.388 NVMe0n1 00:19:22.388 11:32:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:22.388 11:32:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:19:22.388 11:32:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:22.388 11:32:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:19:22.388 11:32:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:22.388 11:32:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:22.388 1 00:19:22.388 11:32:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:19:22.388 11:32:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:19:22.388 11:32:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:19:22.388 11:32:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:19:22.388 11:32:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:22.388 11:32:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:19:22.388 11:32:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:22.388 11:32:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:19:22.388 11:32:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:22.388 11:32:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:22.388 2024/11/20 11:32:07 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostaddr:10.0.0.1 
hostnqn:nqn.2021-09-7.io.spdk:00001 name:NVMe0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified network path 00:19:22.388 request: 00:19:22.388 { 00:19:22.388 "method": "bdev_nvme_attach_controller", 00:19:22.388 "params": { 00:19:22.388 "name": "NVMe0", 00:19:22.388 "trtype": "tcp", 00:19:22.388 "traddr": "10.0.0.3", 00:19:22.388 "adrfam": "ipv4", 00:19:22.388 "trsvcid": "4420", 00:19:22.388 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:22.388 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:19:22.388 "hostaddr": "10.0.0.1", 00:19:22.388 "prchk_reftag": false, 00:19:22.388 "prchk_guard": false, 00:19:22.388 "hdgst": false, 00:19:22.388 "ddgst": false, 00:19:22.388 "allow_unrecognized_csi": false 00:19:22.388 } 00:19:22.388 } 00:19:22.388 Got JSON-RPC error response 00:19:22.388 GoRPCClient: error on JSON-RPC call 00:19:22.388 11:32:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:19:22.388 11:32:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:19:22.388 11:32:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:22.388 11:32:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:22.388 11:32:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:22.388 11:32:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:19:22.388 11:32:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:19:22.388 11:32:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:19:22.388 11:32:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:19:22.388 11:32:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:22.388 11:32:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:19:22.388 11:32:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:22.388 11:32:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:19:22.388 11:32:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:22.388 11:32:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:22.388 2024/11/20 11:32:07 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostaddr:10.0.0.1 name:NVMe0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2016-06.io.spdk:cnode2 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: 
error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified network path 00:19:22.388 request: 00:19:22.388 { 00:19:22.388 "method": "bdev_nvme_attach_controller", 00:19:22.388 "params": { 00:19:22.388 "name": "NVMe0", 00:19:22.388 "trtype": "tcp", 00:19:22.388 "traddr": "10.0.0.3", 00:19:22.388 "adrfam": "ipv4", 00:19:22.388 "trsvcid": "4420", 00:19:22.388 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:19:22.388 "hostaddr": "10.0.0.1", 00:19:22.388 "prchk_reftag": false, 00:19:22.388 "prchk_guard": false, 00:19:22.388 "hdgst": false, 00:19:22.389 "ddgst": false, 00:19:22.389 "allow_unrecognized_csi": false 00:19:22.389 } 00:19:22.389 } 00:19:22.389 Got JSON-RPC error response 00:19:22.389 GoRPCClient: error on JSON-RPC call 00:19:22.389 11:32:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:19:22.389 11:32:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:19:22.389 11:32:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:22.389 11:32:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:22.389 11:32:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:22.389 11:32:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:19:22.389 11:32:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:19:22.389 11:32:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:19:22.389 11:32:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:19:22.389 11:32:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:22.389 11:32:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:19:22.389 11:32:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:22.389 11:32:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:19:22.389 11:32:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:22.389 11:32:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:22.389 2024/11/20 11:32:07 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostaddr:10.0.0.1 multipath:disable name:NVMe0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists and multipath is disabled 00:19:22.389 request: 00:19:22.389 { 00:19:22.389 
"method": "bdev_nvme_attach_controller", 00:19:22.389 "params": { 00:19:22.389 "name": "NVMe0", 00:19:22.389 "trtype": "tcp", 00:19:22.389 "traddr": "10.0.0.3", 00:19:22.389 "adrfam": "ipv4", 00:19:22.389 "trsvcid": "4420", 00:19:22.389 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:22.389 "hostaddr": "10.0.0.1", 00:19:22.389 "prchk_reftag": false, 00:19:22.389 "prchk_guard": false, 00:19:22.389 "hdgst": false, 00:19:22.389 "ddgst": false, 00:19:22.389 "multipath": "disable", 00:19:22.389 "allow_unrecognized_csi": false 00:19:22.389 } 00:19:22.389 } 00:19:22.389 Got JSON-RPC error response 00:19:22.389 GoRPCClient: error on JSON-RPC call 00:19:22.389 11:32:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:19:22.389 11:32:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:19:22.389 11:32:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:22.389 11:32:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:22.389 11:32:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:22.389 11:32:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:19:22.389 11:32:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:19:22.389 11:32:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:19:22.389 11:32:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:19:22.389 11:32:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:22.389 11:32:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:19:22.389 11:32:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:22.389 11:32:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:19:22.389 11:32:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:22.389 11:32:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:22.389 2024/11/20 11:32:07 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostaddr:10.0.0.1 multipath:failover name:NVMe0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified network path 00:19:22.389 request: 00:19:22.389 { 00:19:22.389 "method": "bdev_nvme_attach_controller", 00:19:22.389 "params": { 00:19:22.389 "name": "NVMe0", 00:19:22.389 "trtype": "tcp", 00:19:22.389 "traddr": 
"10.0.0.3", 00:19:22.389 "adrfam": "ipv4", 00:19:22.389 "trsvcid": "4420", 00:19:22.389 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:22.389 "hostaddr": "10.0.0.1", 00:19:22.389 "prchk_reftag": false, 00:19:22.389 "prchk_guard": false, 00:19:22.389 "hdgst": false, 00:19:22.389 "ddgst": false, 00:19:22.389 "multipath": "failover", 00:19:22.389 "allow_unrecognized_csi": false 00:19:22.389 } 00:19:22.389 } 00:19:22.389 Got JSON-RPC error response 00:19:22.389 GoRPCClient: error on JSON-RPC call 00:19:22.389 11:32:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:19:22.389 11:32:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:19:22.389 11:32:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:22.389 11:32:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:22.389 11:32:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:22.389 11:32:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:19:22.389 11:32:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:22.389 11:32:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:22.648 NVMe0n1 00:19:22.648 11:32:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:22.648 11:32:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:19:22.648 11:32:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:22.648 11:32:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:22.648 11:32:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:22.648 11:32:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:19:22.648 11:32:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:22.648 11:32:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:22.648 00:19:22.648 11:32:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:22.648 11:32:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:19:22.648 11:32:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:22.648 11:32:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:19:22.648 11:32:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:22.648 11:32:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:22.648 11:32:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:19:22.648 11:32:08 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:24.025 { 00:19:24.025 "results": [ 00:19:24.025 { 00:19:24.025 "job": "NVMe0n1", 00:19:24.025 "core_mask": "0x1", 00:19:24.025 "workload": "write", 00:19:24.025 "status": "finished", 00:19:24.025 "queue_depth": 128, 00:19:24.025 "io_size": 4096, 00:19:24.025 "runtime": 1.003897, 00:19:24.025 "iops": 18827.62873083593, 00:19:24.025 "mibps": 73.54542472982786, 00:19:24.025 "io_failed": 0, 00:19:24.025 "io_timeout": 0, 00:19:24.025 "avg_latency_us": 6787.493063859055, 00:19:24.025 "min_latency_us": 3649.163636363636, 00:19:24.025 "max_latency_us": 14537.076363636364 00:19:24.025 } 00:19:24.025 ], 00:19:24.025 "core_count": 1 00:19:24.025 } 00:19:24.025 11:32:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:19:24.025 11:32:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:24.025 11:32:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:24.025 11:32:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:24.025 11:32:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # [[ -n 10.0.0.2 ]] 00:19:24.025 11:32:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@102 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme1 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:19:24.025 11:32:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:24.025 11:32:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:24.025 nvme1n1 00:19:24.025 11:32:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:24.025 11:32:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@106 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2016-06.io.spdk:cnode2 00:19:24.025 11:32:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@106 -- # jq -r '.[].peer_address.traddr' 00:19:24.025 11:32:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:24.025 11:32:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:24.025 11:32:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:24.025 11:32:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@106 -- # [[ 10.0.0.1 == \1\0\.\0\.\0\.\1 ]] 00:19:24.025 11:32:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@107 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller nvme1 00:19:24.025 11:32:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:24.025 11:32:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:24.025 11:32:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:24.025 11:32:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@109 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme1 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 
00:19:24.025 11:32:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:24.025 11:32:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:24.025 nvme1n1 00:19:24.025 11:32:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:24.025 11:32:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@113 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2016-06.io.spdk:cnode2 00:19:24.025 11:32:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:24.025 11:32:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:24.025 11:32:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@113 -- # jq -r '.[].peer_address.traddr' 00:19:24.025 11:32:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:24.025 11:32:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@113 -- # [[ 10.0.0.2 == \1\0\.\0\.\0\.\2 ]] 00:19:24.025 11:32:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@116 -- # killprocess 86046 00:19:24.025 11:32:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' -z 86046 ']' 00:19:24.025 11:32:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 86046 00:19:24.025 11:32:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:19:24.025 11:32:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:24.025 11:32:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86046 00:19:24.284 killing process with pid 86046 00:19:24.284 11:32:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:24.284 11:32:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:24.284 11:32:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86046' 00:19:24.284 11:32:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 86046 00:19:24.284 11:32:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 86046 00:19:24.284 11:32:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@118 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:24.284 11:32:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:24.284 11:32:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:24.284 11:32:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:24.284 11:32:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@119 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:19:24.284 11:32:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:24.284 11:32:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:24.284 11:32:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:24.284 11:32:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@121 -- # trap - 
SIGINT SIGTERM EXIT 00:19:24.284 11:32:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@123 -- # pap /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:19:24.284 11:32:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:19:24.284 11:32:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # find /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt -type f 00:19:24.284 11:32:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # sort -u 00:19:24.284 11:32:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1600 -- # cat 00:19:24.284 --- /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt --- 00:19:24.284 [2024-11-20 11:32:07.512981] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 00:19:24.284 [2024-11-20 11:32:07.513634] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86046 ] 00:19:24.284 [2024-11-20 11:32:07.661802] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:24.284 [2024-11-20 11:32:07.694889] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:24.284 [2024-11-20 11:32:08.076788] bdev.c:4696:bdev_name_add: *ERROR*: Bdev name 646e40d2-7674-4ac9-bc85-8bfc6e4285af already exists 00:19:24.284 [2024-11-20 11:32:08.076863] bdev.c:7832:bdev_register: *ERROR*: Unable to add uuid:646e40d2-7674-4ac9-bc85-8bfc6e4285af alias for bdev NVMe1n1 00:19:24.284 [2024-11-20 11:32:08.076883] bdev_nvme.c:4659:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:19:24.284 Running I/O for 1 seconds... 
00:19:24.284 18773.00 IOPS, 73.33 MiB/s 00:19:24.284 Latency(us) 00:19:24.284 [2024-11-20T11:32:09.873Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:24.284 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:19:24.284 NVMe0n1 : 1.00 18827.63 73.55 0.00 0.00 6787.49 3649.16 14537.08 00:19:24.284 [2024-11-20T11:32:09.873Z] =================================================================================================================== 00:19:24.284 [2024-11-20T11:32:09.873Z] Total : 18827.63 73.55 0.00 0.00 6787.49 3649.16 14537.08 00:19:24.284 Received shutdown signal, test time was about 1.000000 seconds 00:19:24.284 00:19:24.284 Latency(us) 00:19:24.284 [2024-11-20T11:32:09.873Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:24.284 [2024-11-20T11:32:09.873Z] =================================================================================================================== 00:19:24.284 [2024-11-20T11:32:09.873Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:24.284 --- /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt --- 00:19:24.284 11:32:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1605 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:19:24.284 11:32:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:19:24.284 11:32:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@124 -- # nvmftestfini 00:19:24.284 11:32:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:24.284 11:32:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # sync 00:19:24.284 11:32:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:24.284 11:32:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set +e 00:19:24.284 11:32:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:24.284 11:32:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:24.284 rmmod nvme_tcp 00:19:24.284 rmmod nvme_fabrics 00:19:24.285 rmmod nvme_keyring 00:19:24.285 11:32:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:24.285 11:32:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@128 -- # set -e 00:19:24.285 11:32:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@129 -- # return 0 00:19:24.285 11:32:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@517 -- # '[' -n 86002 ']' 00:19:24.285 11:32:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@518 -- # killprocess 86002 00:19:24.285 11:32:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' -z 86002 ']' 00:19:24.285 11:32:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 86002 00:19:24.285 11:32:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:19:24.544 11:32:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:24.544 11:32:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86002 00:19:24.544 killing process with pid 86002 00:19:24.544 11:32:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:24.544 11:32:09 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:24.544 11:32:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86002' 00:19:24.544 11:32:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 86002 00:19:24.544 11:32:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 86002 00:19:24.544 11:32:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:24.544 11:32:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:24.544 11:32:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:24.544 11:32:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # iptr 00:19:24.544 11:32:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-save 00:19:24.544 11:32:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:24.544 11:32:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-restore 00:19:24.544 11:32:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:24.544 11:32:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:19:24.544 11:32:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:19:24.544 11:32:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:19:24.544 11:32:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:19:24.544 11:32:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:19:24.803 11:32:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:19:24.803 11:32:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:19:24.803 11:32:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:19:24.803 11:32:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:19:24.803 11:32:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:19:24.803 11:32:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:19:24.803 11:32:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:19:24.803 11:32:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:24.803 11:32:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:24.803 11:32:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@246 -- # remove_spdk_ns 00:19:24.803 11:32:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:24.803 11:32:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:24.803 11:32:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
00:19:24.803 11:32:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@300 -- # return 0 00:19:24.803 00:19:24.803 real 0m4.062s 00:19:24.803 user 0m11.418s 00:19:24.803 sys 0m1.040s 00:19:24.803 11:32:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:24.803 11:32:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:24.803 ************************************ 00:19:24.803 END TEST nvmf_multicontroller 00:19:24.803 ************************************ 00:19:24.803 11:32:10 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /home/vagrant/spdk_repo/spdk/test/nvmf/host/aer.sh --transport=tcp 00:19:24.803 11:32:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:24.803 11:32:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:24.803 11:32:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:19:24.803 ************************************ 00:19:24.803 START TEST nvmf_aer 00:19:24.803 ************************************ 00:19:24.803 11:32:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/aer.sh --transport=tcp 00:19:25.132 * Looking for test storage... 00:19:25.132 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:19:25.132 11:32:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:25.132 11:32:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # lcov --version 00:19:25.132 11:32:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:25.132 11:32:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:25.132 11:32:10 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:25.132 11:32:10 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:25.132 11:32:10 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:25.132 11:32:10 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # IFS=.-: 00:19:25.132 11:32:10 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # read -ra ver1 00:19:25.132 11:32:10 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # IFS=.-: 00:19:25.132 11:32:10 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # read -ra ver2 00:19:25.132 11:32:10 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@338 -- # local 'op=<' 00:19:25.132 11:32:10 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@340 -- # ver1_l=2 00:19:25.132 11:32:10 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@341 -- # ver2_l=1 00:19:25.132 11:32:10 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:25.132 11:32:10 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@344 -- # case "$op" in 00:19:25.132 11:32:10 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@345 -- # : 1 00:19:25.132 11:32:10 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:25.132 11:32:10 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:25.132 11:32:10 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # decimal 1 00:19:25.132 11:32:10 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=1 00:19:25.132 11:32:10 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:25.132 11:32:10 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 1 00:19:25.132 11:32:10 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # ver1[v]=1 00:19:25.132 11:32:10 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # decimal 2 00:19:25.132 11:32:10 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=2 00:19:25.132 11:32:10 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:25.132 11:32:10 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 2 00:19:25.132 11:32:10 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # ver2[v]=2 00:19:25.132 11:32:10 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:25.132 11:32:10 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:25.132 11:32:10 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # return 0 00:19:25.132 11:32:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:25.132 11:32:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:25.132 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:25.133 --rc genhtml_branch_coverage=1 00:19:25.133 --rc genhtml_function_coverage=1 00:19:25.133 --rc genhtml_legend=1 00:19:25.133 --rc geninfo_all_blocks=1 00:19:25.133 --rc geninfo_unexecuted_blocks=1 00:19:25.133 00:19:25.133 ' 00:19:25.133 11:32:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:25.133 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:25.133 --rc genhtml_branch_coverage=1 00:19:25.133 --rc genhtml_function_coverage=1 00:19:25.133 --rc genhtml_legend=1 00:19:25.133 --rc geninfo_all_blocks=1 00:19:25.133 --rc geninfo_unexecuted_blocks=1 00:19:25.133 00:19:25.133 ' 00:19:25.133 11:32:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:25.133 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:25.133 --rc genhtml_branch_coverage=1 00:19:25.133 --rc genhtml_function_coverage=1 00:19:25.133 --rc genhtml_legend=1 00:19:25.133 --rc geninfo_all_blocks=1 00:19:25.133 --rc geninfo_unexecuted_blocks=1 00:19:25.133 00:19:25.133 ' 00:19:25.133 11:32:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:25.133 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:25.133 --rc genhtml_branch_coverage=1 00:19:25.133 --rc genhtml_function_coverage=1 00:19:25.133 --rc genhtml_legend=1 00:19:25.133 --rc geninfo_all_blocks=1 00:19:25.133 --rc geninfo_unexecuted_blocks=1 00:19:25.133 00:19:25.133 ' 00:19:25.133 11:32:10 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:25.133 11:32:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:19:25.133 11:32:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:25.133 11:32:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:25.133 11:32:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:25.133 
11:32:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:25.133 11:32:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:25.133 11:32:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:25.133 11:32:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:25.133 11:32:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:25.133 11:32:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:25.133 11:32:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:25.133 11:32:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a 00:19:25.133 11:32:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=806d442c-08ba-4719-b76d-33169283459a 00:19:25.133 11:32:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:25.133 11:32:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:25.133 11:32:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:25.133 11:32:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:25.133 11:32:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:25.133 11:32:10 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@15 -- # shopt -s extglob 00:19:25.133 11:32:10 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:25.133 11:32:10 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:25.133 11:32:10 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:25.133 11:32:10 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:25.133 11:32:10 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:25.133 11:32:10 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:25.133 11:32:10 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:19:25.133 11:32:10 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:25.133 11:32:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # : 0 00:19:25.133 11:32:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:25.133 11:32:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:25.133 11:32:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:25.133 11:32:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:25.133 11:32:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:25.133 11:32:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:25.133 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:25.133 11:32:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:25.133 11:32:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:25.133 11:32:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:25.133 11:32:10 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:19:25.133 11:32:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:25.133 11:32:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:25.133 11:32:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:25.133 11:32:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:25.133 11:32:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:25.133 11:32:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:25.133 11:32:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:25.133 11:32:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:25.133 11:32:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:19:25.133 11:32:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@444 -- # [[ no == yes ]] 
00:19:25.133 11:32:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:19:25.133 11:32:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:19:25.133 11:32:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:19:25.133 11:32:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@460 -- # nvmf_veth_init 00:19:25.133 11:32:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:25.133 11:32:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:19:25.133 11:32:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:19:25.133 11:32:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:19:25.133 11:32:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:25.133 11:32:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:19:25.133 11:32:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:25.133 11:32:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:19:25.133 11:32:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:25.133 11:32:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:19:25.133 11:32:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:25.133 11:32:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:25.133 11:32:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:25.133 11:32:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:25.133 11:32:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:25.133 11:32:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:25.133 11:32:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:19:25.133 Cannot find device "nvmf_init_br" 00:19:25.133 11:32:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@162 -- # true 00:19:25.133 11:32:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:19:25.133 Cannot find device "nvmf_init_br2" 00:19:25.133 11:32:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@163 -- # true 00:19:25.133 11:32:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:19:25.133 Cannot find device "nvmf_tgt_br" 00:19:25.133 11:32:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@164 -- # true 00:19:25.133 11:32:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:19:25.133 Cannot find device "nvmf_tgt_br2" 00:19:25.133 11:32:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@165 -- # true 00:19:25.133 11:32:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:19:25.133 Cannot find device "nvmf_init_br" 00:19:25.133 11:32:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@166 -- # true 00:19:25.133 11:32:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:19:25.133 Cannot find device "nvmf_init_br2" 00:19:25.134 11:32:10 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@167 -- # true 00:19:25.134 11:32:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:19:25.134 Cannot find device "nvmf_tgt_br" 00:19:25.134 11:32:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@168 -- # true 00:19:25.134 11:32:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:19:25.134 Cannot find device "nvmf_tgt_br2" 00:19:25.134 11:32:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@169 -- # true 00:19:25.134 11:32:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:19:25.134 Cannot find device "nvmf_br" 00:19:25.134 11:32:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@170 -- # true 00:19:25.134 11:32:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:19:25.134 Cannot find device "nvmf_init_if" 00:19:25.134 11:32:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@171 -- # true 00:19:25.134 11:32:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:19:25.134 Cannot find device "nvmf_init_if2" 00:19:25.134 11:32:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@172 -- # true 00:19:25.134 11:32:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:25.134 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:25.134 11:32:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@173 -- # true 00:19:25.134 11:32:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:25.134 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:25.406 11:32:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@174 -- # true 00:19:25.406 11:32:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:19:25.406 11:32:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:25.406 11:32:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:19:25.406 11:32:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:25.406 11:32:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:25.406 11:32:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:25.406 11:32:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:25.406 11:32:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:25.406 11:32:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:19:25.406 11:32:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:19:25.406 11:32:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:19:25.406 11:32:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:19:25.406 11:32:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:19:25.406 11:32:10 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:19:25.406 11:32:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:19:25.406 11:32:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:19:25.406 11:32:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:19:25.406 11:32:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:25.406 11:32:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:25.406 11:32:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:25.406 11:32:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:19:25.406 11:32:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:19:25.406 11:32:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:19:25.406 11:32:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:19:25.406 11:32:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:25.406 11:32:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:25.406 11:32:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:25.406 11:32:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:19:25.406 11:32:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:19:25.406 11:32:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:19:25.406 11:32:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:25.406 11:32:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:19:25.406 11:32:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:19:25.406 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:25.406 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.082 ms 00:19:25.406 00:19:25.406 --- 10.0.0.3 ping statistics --- 00:19:25.406 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:25.406 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:19:25.406 11:32:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:19:25.406 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:19:25.406 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.052 ms 00:19:25.406 00:19:25.406 --- 10.0.0.4 ping statistics --- 00:19:25.406 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:25.406 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:19:25.406 11:32:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:25.406 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:25.406 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:19:25.406 00:19:25.406 --- 10.0.0.1 ping statistics --- 00:19:25.406 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:25.406 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:19:25.406 11:32:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:19:25.406 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:25.406 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.069 ms 00:19:25.406 00:19:25.406 --- 10.0.0.2 ping statistics --- 00:19:25.406 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:25.406 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:19:25.406 11:32:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:25.406 11:32:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@461 -- # return 0 00:19:25.406 11:32:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:25.406 11:32:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:25.407 11:32:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:25.407 11:32:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:25.407 11:32:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:25.407 11:32:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:25.407 11:32:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:25.407 11:32:10 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:19:25.407 11:32:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:25.407 11:32:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:25.407 11:32:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:19:25.666 11:32:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@509 -- # nvmfpid=86319 00:19:25.666 11:32:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@510 -- # waitforlisten 86319 00:19:25.666 11:32:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:19:25.666 11:32:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # '[' -z 86319 ']' 00:19:25.666 11:32:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:25.666 11:32:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:25.666 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:25.666 11:32:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:25.666 11:32:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:25.666 11:32:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:19:25.666 [2024-11-20 11:32:11.045696] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 
00:19:25.666 [2024-11-20 11:32:11.045779] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:25.666 [2024-11-20 11:32:11.192516] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:25.666 [2024-11-20 11:32:11.226588] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:25.666 [2024-11-20 11:32:11.226658] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:25.666 [2024-11-20 11:32:11.226670] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:25.666 [2024-11-20 11:32:11.226679] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:25.666 [2024-11-20 11:32:11.226686] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:25.666 [2024-11-20 11:32:11.227512] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:25.666 [2024-11-20 11:32:11.227608] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:25.666 [2024-11-20 11:32:11.227710] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:19:25.666 [2024-11-20 11:32:11.227712] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:25.925 11:32:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:25.925 11:32:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@868 -- # return 0 00:19:25.925 11:32:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:25.925 11:32:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:25.925 11:32:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:19:25.925 11:32:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:25.925 11:32:11 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:25.925 11:32:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.925 11:32:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:19:25.925 [2024-11-20 11:32:11.359663] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:25.925 11:32:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.925 11:32:11 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:19:25.925 11:32:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.925 11:32:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:19:25.925 Malloc0 00:19:25.925 11:32:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.925 11:32:11 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:19:25.925 11:32:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.926 11:32:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:19:25.926 11:32:11 nvmf_tcp.nvmf_host.nvmf_aer -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.926 11:32:11 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:25.926 11:32:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.926 11:32:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:19:25.926 11:32:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.926 11:32:11 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:19:25.926 11:32:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.926 11:32:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:19:25.926 [2024-11-20 11:32:11.409585] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:19:25.926 11:32:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.926 11:32:11 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:19:25.926 11:32:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.926 11:32:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:19:25.926 [ 00:19:25.926 { 00:19:25.926 "allow_any_host": true, 00:19:25.926 "hosts": [], 00:19:25.926 "listen_addresses": [], 00:19:25.926 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:19:25.926 "subtype": "Discovery" 00:19:25.926 }, 00:19:25.926 { 00:19:25.926 "allow_any_host": true, 00:19:25.926 "hosts": [], 00:19:25.926 "listen_addresses": [ 00:19:25.926 { 00:19:25.926 "adrfam": "IPv4", 00:19:25.926 "traddr": "10.0.0.3", 00:19:25.926 "trsvcid": "4420", 00:19:25.926 "trtype": "TCP" 00:19:25.926 } 00:19:25.926 ], 00:19:25.926 "max_cntlid": 65519, 00:19:25.926 "max_namespaces": 2, 00:19:25.926 "min_cntlid": 1, 00:19:25.926 "model_number": "SPDK bdev Controller", 00:19:25.926 "namespaces": [ 00:19:25.926 { 00:19:25.926 "bdev_name": "Malloc0", 00:19:25.926 "name": "Malloc0", 00:19:25.926 "nguid": "D44E2E6ECE3A46398C1448CE89CBB0A3", 00:19:25.926 "nsid": 1, 00:19:25.926 "uuid": "d44e2e6e-ce3a-4639-8c14-48ce89cbb0a3" 00:19:25.926 } 00:19:25.926 ], 00:19:25.926 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:25.926 "serial_number": "SPDK00000000000001", 00:19:25.926 "subtype": "NVMe" 00:19:25.926 } 00:19:25.926 ] 00:19:25.926 11:32:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.926 11:32:11 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:19:25.926 11:32:11 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:19:25.926 11:32:11 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=86354 00:19:25.926 11:32:11 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:19:25.926 11:32:11 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:19:25.926 11:32:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # local i=0 00:19:25.926 11:32:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:19:25.926 11:32:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 0 -lt 200 ']' 00:19:25.926 11:32:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=1 00:19:25.926 11:32:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:19:26.185 11:32:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:19:26.185 11:32:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 1 -lt 200 ']' 00:19:26.185 11:32:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=2 00:19:26.185 11:32:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:19:26.185 11:32:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:19:26.185 11:32:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:19:26.185 11:32:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1280 -- # return 0 00:19:26.185 11:32:11 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:19:26.185 11:32:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:26.185 11:32:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:19:26.185 Malloc1 00:19:26.185 11:32:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:26.185 11:32:11 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:19:26.185 11:32:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:26.185 11:32:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:19:26.185 11:32:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:26.185 11:32:11 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:19:26.185 11:32:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:26.185 11:32:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:19:26.185 [ 00:19:26.185 { 00:19:26.185 "allow_any_host": true, 00:19:26.185 "hosts": [], 00:19:26.185 "listen_addresses": [], 00:19:26.185 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:19:26.185 "subtype": "Discovery" 00:19:26.185 }, 00:19:26.185 { 00:19:26.185 "allow_any_host": true, 00:19:26.185 "hosts": [], 00:19:26.185 "listen_addresses": [ 00:19:26.185 { 00:19:26.185 "adrfam": "IPv4", 00:19:26.185 "traddr": "10.0.0.3", 00:19:26.185 "trsvcid": "4420", 00:19:26.185 "trtype": "TCP" 00:19:26.185 } 00:19:26.185 ], 00:19:26.185 "max_cntlid": 65519, 00:19:26.185 "max_namespaces": 2, 00:19:26.185 "min_cntlid": 1, 00:19:26.185 "model_number": "SPDK bdev Controller", 00:19:26.185 "namespaces": [ 00:19:26.185 { 00:19:26.185 "bdev_name": "Malloc0", 00:19:26.185 "name": "Malloc0", 00:19:26.185 "nguid": "D44E2E6ECE3A46398C1448CE89CBB0A3", 00:19:26.185 "nsid": 1, 00:19:26.185 "uuid": "d44e2e6e-ce3a-4639-8c14-48ce89cbb0a3" 00:19:26.185 }, 00:19:26.185 Asynchronous Event Request test 00:19:26.185 Attaching to 10.0.0.3 00:19:26.185 Attached to 10.0.0.3 00:19:26.185 Registering asynchronous event callbacks... 00:19:26.185 Starting namespace attribute notice tests for all controllers... 
00:19:26.185 10.0.0.3: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:19:26.185 aer_cb - Changed Namespace 00:19:26.185 Cleaning up... 00:19:26.185 { 00:19:26.185 "bdev_name": "Malloc1", 00:19:26.185 "name": "Malloc1", 00:19:26.185 "nguid": "CF1D5C7D8F594EDCAE787224C69B5750", 00:19:26.185 "nsid": 2, 00:19:26.185 "uuid": "cf1d5c7d-8f59-4edc-ae78-7224c69b5750" 00:19:26.185 } 00:19:26.185 ], 00:19:26.185 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:26.185 "serial_number": "SPDK00000000000001", 00:19:26.185 "subtype": "NVMe" 00:19:26.185 } 00:19:26.185 ] 00:19:26.185 11:32:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:26.185 11:32:11 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 86354 00:19:26.185 11:32:11 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:19:26.185 11:32:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:26.185 11:32:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:19:26.185 11:32:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:26.185 11:32:11 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:19:26.185 11:32:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:26.185 11:32:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:19:26.185 11:32:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:26.185 11:32:11 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:26.185 11:32:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:26.185 11:32:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:19:26.185 11:32:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:26.185 11:32:11 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:19:26.185 11:32:11 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:19:26.185 11:32:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:26.185 11:32:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # sync 00:19:26.443 11:32:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:26.443 11:32:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set +e 00:19:26.443 11:32:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:26.443 11:32:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:26.443 rmmod nvme_tcp 00:19:26.443 rmmod nvme_fabrics 00:19:26.443 rmmod nvme_keyring 00:19:26.443 11:32:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:26.443 11:32:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@128 -- # set -e 00:19:26.443 11:32:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@129 -- # return 0 00:19:26.443 11:32:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@517 -- # '[' -n 86319 ']' 00:19:26.443 11:32:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@518 -- # killprocess 86319 00:19:26.443 11:32:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # '[' -z 86319 ']' 00:19:26.443 11:32:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 -- # kill -0 86319 00:19:26.443 11:32:11 nvmf_tcp.nvmf_host.nvmf_aer -- 
common/autotest_common.sh@959 -- # uname 00:19:26.443 11:32:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:26.443 11:32:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86319 00:19:26.443 11:32:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:26.443 11:32:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:26.443 killing process with pid 86319 00:19:26.443 11:32:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86319' 00:19:26.443 11:32:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@973 -- # kill 86319 00:19:26.443 11:32:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@978 -- # wait 86319 00:19:26.443 11:32:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:26.443 11:32:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:26.443 11:32:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:26.443 11:32:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # iptr 00:19:26.443 11:32:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-save 00:19:26.443 11:32:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:26.443 11:32:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-restore 00:19:26.443 11:32:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:26.443 11:32:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:19:26.443 11:32:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:19:26.701 11:32:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:19:26.701 11:32:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:19:26.701 11:32:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:19:26.701 11:32:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:19:26.701 11:32:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:19:26.701 11:32:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:19:26.701 11:32:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:19:26.701 11:32:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:19:26.701 11:32:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:19:26.701 11:32:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:19:26.701 11:32:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:26.701 11:32:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:26.701 11:32:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@246 -- # remove_spdk_ns 00:19:26.701 11:32:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:26.701 11:32:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:26.701 11:32:12 
nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:26.701 11:32:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@300 -- # return 0 00:19:26.701 00:19:26.701 real 0m1.901s 00:19:26.701 user 0m3.406s 00:19:26.701 sys 0m0.655s 00:19:26.701 ************************************ 00:19:26.701 END TEST nvmf_aer 00:19:26.701 ************************************ 00:19:26.701 11:32:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:26.701 11:32:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:19:26.960 11:32:12 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init /home/vagrant/spdk_repo/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:19:26.960 11:32:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:26.960 11:32:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:26.960 11:32:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:19:26.960 ************************************ 00:19:26.960 START TEST nvmf_async_init 00:19:26.960 ************************************ 00:19:26.961 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:19:26.961 * Looking for test storage... 00:19:26.961 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:19:26.961 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:26.961 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:26.961 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # lcov --version 00:19:26.961 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:26.961 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:26.961 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:26.961 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:26.961 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # IFS=.-: 00:19:26.961 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # read -ra ver1 00:19:26.961 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # IFS=.-: 00:19:26.961 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # read -ra ver2 00:19:26.961 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@338 -- # local 'op=<' 00:19:26.961 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@340 -- # ver1_l=2 00:19:26.961 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@341 -- # ver2_l=1 00:19:26.961 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:26.961 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@344 -- # case "$op" in 00:19:26.961 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@345 -- # : 1 00:19:26.961 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:26.961 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:26.961 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # decimal 1 00:19:26.961 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=1 00:19:26.961 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:26.961 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 1 00:19:26.961 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # ver1[v]=1 00:19:26.961 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # decimal 2 00:19:26.961 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=2 00:19:26.961 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:26.961 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 2 00:19:26.961 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # ver2[v]=2 00:19:26.961 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:26.961 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:26.961 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # return 0 00:19:26.961 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:26.961 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:26.961 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:26.961 --rc genhtml_branch_coverage=1 00:19:26.961 --rc genhtml_function_coverage=1 00:19:26.961 --rc genhtml_legend=1 00:19:26.961 --rc geninfo_all_blocks=1 00:19:26.961 --rc geninfo_unexecuted_blocks=1 00:19:26.961 00:19:26.961 ' 00:19:26.961 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:26.961 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:26.961 --rc genhtml_branch_coverage=1 00:19:26.961 --rc genhtml_function_coverage=1 00:19:26.961 --rc genhtml_legend=1 00:19:26.961 --rc geninfo_all_blocks=1 00:19:26.961 --rc geninfo_unexecuted_blocks=1 00:19:26.961 00:19:26.961 ' 00:19:26.961 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:26.961 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:26.961 --rc genhtml_branch_coverage=1 00:19:26.961 --rc genhtml_function_coverage=1 00:19:26.961 --rc genhtml_legend=1 00:19:26.961 --rc geninfo_all_blocks=1 00:19:26.961 --rc geninfo_unexecuted_blocks=1 00:19:26.961 00:19:26.961 ' 00:19:26.961 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:26.961 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:26.961 --rc genhtml_branch_coverage=1 00:19:26.961 --rc genhtml_function_coverage=1 00:19:26.961 --rc genhtml_legend=1 00:19:26.961 --rc geninfo_all_blocks=1 00:19:26.961 --rc geninfo_unexecuted_blocks=1 00:19:26.961 00:19:26.961 ' 00:19:26.961 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:26.961 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:19:26.961 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:26.961 11:32:12 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:26.961 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:26.961 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:26.961 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:26.961 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:26.961 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:26.961 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:26.961 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:26.961 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:26.961 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a 00:19:26.961 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=806d442c-08ba-4719-b76d-33169283459a 00:19:26.961 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:26.961 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:26.961 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:26.961 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:26.961 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:26.961 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@15 -- # shopt -s extglob 00:19:26.961 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:26.961 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:26.961 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:26.961 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:26.961 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:26.961 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:26.961 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:19:26.961 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:26.961 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # : 0 00:19:26.961 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:26.961 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:26.961 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:26.961 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:26.961 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:26.961 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:26.961 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:26.961 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:26.961 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:26.961 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:26.961 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:19:26.961 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:19:26.961 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:19:26.962 11:32:12 
nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:19:26.962 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:19:26.962 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:19:27.220 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=dda4bb84274247a282b00ea0ff56d18c 00:19:27.220 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:19:27.220 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:27.220 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:27.220 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:27.220 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:27.220 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:27.220 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:27.220 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:27.220 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:27.220 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:19:27.220 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:19:27.220 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:19:27.220 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:19:27.220 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:19:27.220 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@460 -- # nvmf_veth_init 00:19:27.220 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:27.220 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:19:27.220 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:19:27.220 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:19:27.220 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:27.220 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:19:27.220 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:27.220 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:19:27.220 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:27.220 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:19:27.220 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:27.220 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:27.220 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 
00:19:27.220 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:27.220 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:27.220 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:27.220 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:19:27.220 Cannot find device "nvmf_init_br" 00:19:27.220 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@162 -- # true 00:19:27.220 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:19:27.220 Cannot find device "nvmf_init_br2" 00:19:27.220 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@163 -- # true 00:19:27.220 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:19:27.220 Cannot find device "nvmf_tgt_br" 00:19:27.220 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@164 -- # true 00:19:27.220 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:19:27.220 Cannot find device "nvmf_tgt_br2" 00:19:27.220 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@165 -- # true 00:19:27.220 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:19:27.220 Cannot find device "nvmf_init_br" 00:19:27.220 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@166 -- # true 00:19:27.220 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:19:27.220 Cannot find device "nvmf_init_br2" 00:19:27.220 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@167 -- # true 00:19:27.220 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:19:27.220 Cannot find device "nvmf_tgt_br" 00:19:27.220 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@168 -- # true 00:19:27.220 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:19:27.220 Cannot find device "nvmf_tgt_br2" 00:19:27.220 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@169 -- # true 00:19:27.220 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:19:27.220 Cannot find device "nvmf_br" 00:19:27.220 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@170 -- # true 00:19:27.220 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:19:27.220 Cannot find device "nvmf_init_if" 00:19:27.220 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@171 -- # true 00:19:27.220 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:19:27.220 Cannot find device "nvmf_init_if2" 00:19:27.220 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@172 -- # true 00:19:27.220 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:27.220 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:27.220 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@173 -- # true 00:19:27.220 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link delete nvmf_tgt_if2 00:19:27.220 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:27.220 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@174 -- # true 00:19:27.220 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:19:27.220 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:27.220 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:19:27.220 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:27.220 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:27.220 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:27.220 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:27.220 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:27.220 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:19:27.220 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:19:27.480 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:19:27.480 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:19:27.480 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:19:27.480 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:19:27.480 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:19:27.480 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:19:27.480 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:19:27.480 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:27.480 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:27.480 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:27.480 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:19:27.480 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:19:27.480 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:19:27.480 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:19:27.480 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:27.480 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:27.480 11:32:12 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:27.480 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:19:27.480 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:19:27.480 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:19:27.480 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:27.480 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:19:27.480 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:19:27.480 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:27.480 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.086 ms 00:19:27.480 00:19:27.480 --- 10.0.0.3 ping statistics --- 00:19:27.480 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:27.480 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:19:27.480 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:19:27.480 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:19:27.480 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.069 ms 00:19:27.480 00:19:27.480 --- 10.0.0.4 ping statistics --- 00:19:27.480 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:27.480 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:19:27.480 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:27.480 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:27.480 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:19:27.480 00:19:27.480 --- 10.0.0.1 ping statistics --- 00:19:27.480 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:27.480 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:19:27.480 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:19:27.480 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:19:27.480 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.056 ms 00:19:27.480 00:19:27.480 --- 10.0.0.2 ping statistics --- 00:19:27.480 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:27.480 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:19:27.480 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:27.480 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@461 -- # return 0 00:19:27.480 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:27.480 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:27.480 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:27.480 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:27.480 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:27.480 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:27.480 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:27.480 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:19:27.480 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:27.480 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:27.480 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:19:27.480 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:27.480 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@509 -- # nvmfpid=86578 00:19:27.480 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@510 -- # waitforlisten 86578 00:19:27.480 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # '[' -z 86578 ']' 00:19:27.480 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:27.480 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:27.480 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:27.480 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:19:27.480 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:27.480 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:19:27.480 [2024-11-20 11:32:13.052685] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 00:19:27.480 [2024-11-20 11:32:13.052777] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:27.739 [2024-11-20 11:32:13.202286] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:27.739 [2024-11-20 11:32:13.239974] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:19:27.739 [2024-11-20 11:32:13.240051] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:27.739 [2024-11-20 11:32:13.240064] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:27.739 [2024-11-20 11:32:13.240073] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:27.739 [2024-11-20 11:32:13.240082] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:27.739 [2024-11-20 11:32:13.240424] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:27.999 11:32:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:27.999 11:32:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@868 -- # return 0 00:19:27.999 11:32:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:27.999 11:32:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:27.999 11:32:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:19:27.999 11:32:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:27.999 11:32:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:19:27.999 11:32:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.999 11:32:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:19:27.999 [2024-11-20 11:32:13.375536] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:27.999 11:32:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.999 11:32:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:19:27.999 11:32:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.999 11:32:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:19:27.999 null0 00:19:27.999 11:32:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.999 11:32:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:19:27.999 11:32:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.999 11:32:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:19:27.999 11:32:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.999 11:32:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:19:27.999 11:32:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.999 11:32:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:19:27.999 11:32:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.999 11:32:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g dda4bb84274247a282b00ea0ff56d18c 00:19:27.999 11:32:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 
-- # xtrace_disable 00:19:27.999 11:32:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:19:27.999 11:32:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.999 11:32:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:19:27.999 11:32:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.999 11:32:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:19:27.999 [2024-11-20 11:32:13.415645] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:19:27.999 11:32:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.999 11:32:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:19:27.999 11:32:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.999 11:32:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:19:28.259 nvme0n1 00:19:28.259 11:32:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.259 11:32:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:19:28.259 11:32:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.259 11:32:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:19:28.259 [ 00:19:28.259 { 00:19:28.259 "aliases": [ 00:19:28.259 "dda4bb84-2742-47a2-82b0-0ea0ff56d18c" 00:19:28.259 ], 00:19:28.259 "assigned_rate_limits": { 00:19:28.259 "r_mbytes_per_sec": 0, 00:19:28.259 "rw_ios_per_sec": 0, 00:19:28.259 "rw_mbytes_per_sec": 0, 00:19:28.259 "w_mbytes_per_sec": 0 00:19:28.259 }, 00:19:28.259 "block_size": 512, 00:19:28.259 "claimed": false, 00:19:28.259 "driver_specific": { 00:19:28.259 "mp_policy": "active_passive", 00:19:28.259 "nvme": [ 00:19:28.259 { 00:19:28.259 "ctrlr_data": { 00:19:28.259 "ana_reporting": false, 00:19:28.259 "cntlid": 1, 00:19:28.259 "firmware_revision": "25.01", 00:19:28.259 "model_number": "SPDK bdev Controller", 00:19:28.259 "multi_ctrlr": true, 00:19:28.259 "oacs": { 00:19:28.259 "firmware": 0, 00:19:28.259 "format": 0, 00:19:28.259 "ns_manage": 0, 00:19:28.259 "security": 0 00:19:28.259 }, 00:19:28.259 "serial_number": "00000000000000000000", 00:19:28.259 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:19:28.259 "vendor_id": "0x8086" 00:19:28.259 }, 00:19:28.259 "ns_data": { 00:19:28.259 "can_share": true, 00:19:28.259 "id": 1 00:19:28.259 }, 00:19:28.259 "trid": { 00:19:28.259 "adrfam": "IPv4", 00:19:28.259 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:19:28.259 "traddr": "10.0.0.3", 00:19:28.259 "trsvcid": "4420", 00:19:28.259 "trtype": "TCP" 00:19:28.259 }, 00:19:28.259 "vs": { 00:19:28.259 "nvme_version": "1.3" 00:19:28.259 } 00:19:28.259 } 00:19:28.259 ] 00:19:28.259 }, 00:19:28.259 "memory_domains": [ 00:19:28.259 { 00:19:28.259 "dma_device_id": "system", 00:19:28.259 "dma_device_type": 1 00:19:28.259 } 00:19:28.259 ], 00:19:28.259 "name": "nvme0n1", 00:19:28.259 "num_blocks": 2097152, 00:19:28.259 "numa_id": -1, 00:19:28.259 "product_name": "NVMe disk", 00:19:28.259 "supported_io_types": { 00:19:28.259 "abort": true, 
00:19:28.259 "compare": true, 00:19:28.259 "compare_and_write": true, 00:19:28.259 "copy": true, 00:19:28.259 "flush": true, 00:19:28.259 "get_zone_info": false, 00:19:28.259 "nvme_admin": true, 00:19:28.259 "nvme_io": true, 00:19:28.259 "nvme_io_md": false, 00:19:28.259 "nvme_iov_md": false, 00:19:28.259 "read": true, 00:19:28.259 "reset": true, 00:19:28.259 "seek_data": false, 00:19:28.259 "seek_hole": false, 00:19:28.259 "unmap": false, 00:19:28.259 "write": true, 00:19:28.259 "write_zeroes": true, 00:19:28.259 "zcopy": false, 00:19:28.259 "zone_append": false, 00:19:28.259 "zone_management": false 00:19:28.259 }, 00:19:28.259 "uuid": "dda4bb84-2742-47a2-82b0-0ea0ff56d18c", 00:19:28.259 "zoned": false 00:19:28.259 } 00:19:28.259 ] 00:19:28.259 11:32:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.259 11:32:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:19:28.259 11:32:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.259 11:32:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:19:28.259 [2024-11-20 11:32:13.680636] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:19:28.259 [2024-11-20 11:32:13.680733] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21b2080 (9): Bad file descriptor 00:19:28.259 [2024-11-20 11:32:13.812990] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 00:19:28.259 11:32:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.259 11:32:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:19:28.259 11:32:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.259 11:32:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:19:28.259 [ 00:19:28.259 { 00:19:28.259 "aliases": [ 00:19:28.259 "dda4bb84-2742-47a2-82b0-0ea0ff56d18c" 00:19:28.259 ], 00:19:28.259 "assigned_rate_limits": { 00:19:28.259 "r_mbytes_per_sec": 0, 00:19:28.259 "rw_ios_per_sec": 0, 00:19:28.259 "rw_mbytes_per_sec": 0, 00:19:28.259 "w_mbytes_per_sec": 0 00:19:28.259 }, 00:19:28.259 "block_size": 512, 00:19:28.259 "claimed": false, 00:19:28.259 "driver_specific": { 00:19:28.259 "mp_policy": "active_passive", 00:19:28.259 "nvme": [ 00:19:28.259 { 00:19:28.259 "ctrlr_data": { 00:19:28.259 "ana_reporting": false, 00:19:28.259 "cntlid": 2, 00:19:28.259 "firmware_revision": "25.01", 00:19:28.259 "model_number": "SPDK bdev Controller", 00:19:28.259 "multi_ctrlr": true, 00:19:28.259 "oacs": { 00:19:28.259 "firmware": 0, 00:19:28.259 "format": 0, 00:19:28.259 "ns_manage": 0, 00:19:28.259 "security": 0 00:19:28.259 }, 00:19:28.259 "serial_number": "00000000000000000000", 00:19:28.259 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:19:28.259 "vendor_id": "0x8086" 00:19:28.259 }, 00:19:28.259 "ns_data": { 00:19:28.259 "can_share": true, 00:19:28.259 "id": 1 00:19:28.259 }, 00:19:28.259 "trid": { 00:19:28.259 "adrfam": "IPv4", 00:19:28.259 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:19:28.259 "traddr": "10.0.0.3", 00:19:28.259 "trsvcid": "4420", 00:19:28.259 "trtype": "TCP" 00:19:28.259 }, 00:19:28.259 "vs": { 00:19:28.259 "nvme_version": "1.3" 00:19:28.259 } 00:19:28.259 } 00:19:28.259 ] 
00:19:28.259 }, 00:19:28.259 "memory_domains": [ 00:19:28.259 { 00:19:28.259 "dma_device_id": "system", 00:19:28.259 "dma_device_type": 1 00:19:28.259 } 00:19:28.259 ], 00:19:28.259 "name": "nvme0n1", 00:19:28.259 "num_blocks": 2097152, 00:19:28.259 "numa_id": -1, 00:19:28.259 "product_name": "NVMe disk", 00:19:28.259 "supported_io_types": { 00:19:28.259 "abort": true, 00:19:28.259 "compare": true, 00:19:28.259 "compare_and_write": true, 00:19:28.259 "copy": true, 00:19:28.259 "flush": true, 00:19:28.259 "get_zone_info": false, 00:19:28.259 "nvme_admin": true, 00:19:28.259 "nvme_io": true, 00:19:28.259 "nvme_io_md": false, 00:19:28.259 "nvme_iov_md": false, 00:19:28.259 "read": true, 00:19:28.259 "reset": true, 00:19:28.259 "seek_data": false, 00:19:28.259 "seek_hole": false, 00:19:28.259 "unmap": false, 00:19:28.259 "write": true, 00:19:28.259 "write_zeroes": true, 00:19:28.259 "zcopy": false, 00:19:28.259 "zone_append": false, 00:19:28.259 "zone_management": false 00:19:28.259 }, 00:19:28.259 "uuid": "dda4bb84-2742-47a2-82b0-0ea0ff56d18c", 00:19:28.259 "zoned": false 00:19:28.259 } 00:19:28.259 ] 00:19:28.259 11:32:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.259 11:32:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:28.259 11:32:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.259 11:32:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:19:28.518 11:32:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.518 11:32:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:19:28.518 11:32:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.0OOqAlfk4k 00:19:28.518 11:32:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:19:28.518 11:32:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.0OOqAlfk4k 00:19:28.518 11:32:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd keyring_file_add_key key0 /tmp/tmp.0OOqAlfk4k 00:19:28.518 11:32:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.518 11:32:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:19:28.518 11:32:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.518 11:32:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:19:28.518 11:32:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.518 11:32:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:19:28.518 11:32:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.518 11:32:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@58 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4421 --secure-channel 00:19:28.519 11:32:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.519 11:32:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:19:28.519 [2024-11-20 11:32:13.888822] 
tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:28.519 [2024-11-20 11:32:13.889031] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:19:28.519 11:32:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.519 11:32:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@60 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0 00:19:28.519 11:32:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.519 11:32:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:19:28.519 11:32:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.519 11:32:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@66 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:28.519 11:32:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.519 11:32:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:19:28.519 [2024-11-20 11:32:13.904883] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:28.519 nvme0n1 00:19:28.519 11:32:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.519 11:32:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@70 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:19:28.519 11:32:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.519 11:32:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:19:28.519 [ 00:19:28.519 { 00:19:28.519 "aliases": [ 00:19:28.519 "dda4bb84-2742-47a2-82b0-0ea0ff56d18c" 00:19:28.519 ], 00:19:28.519 "assigned_rate_limits": { 00:19:28.519 "r_mbytes_per_sec": 0, 00:19:28.519 "rw_ios_per_sec": 0, 00:19:28.519 "rw_mbytes_per_sec": 0, 00:19:28.519 "w_mbytes_per_sec": 0 00:19:28.519 }, 00:19:28.519 "block_size": 512, 00:19:28.519 "claimed": false, 00:19:28.519 "driver_specific": { 00:19:28.519 "mp_policy": "active_passive", 00:19:28.519 "nvme": [ 00:19:28.519 { 00:19:28.519 "ctrlr_data": { 00:19:28.519 "ana_reporting": false, 00:19:28.519 "cntlid": 3, 00:19:28.519 "firmware_revision": "25.01", 00:19:28.519 "model_number": "SPDK bdev Controller", 00:19:28.519 "multi_ctrlr": true, 00:19:28.519 "oacs": { 00:19:28.519 "firmware": 0, 00:19:28.519 "format": 0, 00:19:28.519 "ns_manage": 0, 00:19:28.519 "security": 0 00:19:28.519 }, 00:19:28.519 "serial_number": "00000000000000000000", 00:19:28.519 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:19:28.519 "vendor_id": "0x8086" 00:19:28.519 }, 00:19:28.519 "ns_data": { 00:19:28.519 "can_share": true, 00:19:28.519 "id": 1 00:19:28.519 }, 00:19:28.519 "trid": { 00:19:28.519 "adrfam": "IPv4", 00:19:28.519 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:19:28.519 "traddr": "10.0.0.3", 00:19:28.519 "trsvcid": "4421", 00:19:28.519 "trtype": "TCP" 00:19:28.519 }, 00:19:28.519 "vs": { 00:19:28.519 "nvme_version": "1.3" 00:19:28.519 } 00:19:28.519 } 00:19:28.519 ] 00:19:28.519 }, 00:19:28.519 "memory_domains": [ 00:19:28.519 { 00:19:28.519 "dma_device_id": "system", 00:19:28.519 "dma_device_type": 1 00:19:28.519 } 00:19:28.519 ], 00:19:28.519 "name": "nvme0n1", 00:19:28.519 "num_blocks": 
2097152, 00:19:28.519 "numa_id": -1, 00:19:28.519 "product_name": "NVMe disk", 00:19:28.519 "supported_io_types": { 00:19:28.519 "abort": true, 00:19:28.519 "compare": true, 00:19:28.519 "compare_and_write": true, 00:19:28.519 "copy": true, 00:19:28.519 "flush": true, 00:19:28.519 "get_zone_info": false, 00:19:28.519 "nvme_admin": true, 00:19:28.519 "nvme_io": true, 00:19:28.519 "nvme_io_md": false, 00:19:28.519 "nvme_iov_md": false, 00:19:28.519 "read": true, 00:19:28.519 "reset": true, 00:19:28.519 "seek_data": false, 00:19:28.519 "seek_hole": false, 00:19:28.519 "unmap": false, 00:19:28.519 "write": true, 00:19:28.519 "write_zeroes": true, 00:19:28.519 "zcopy": false, 00:19:28.519 "zone_append": false, 00:19:28.519 "zone_management": false 00:19:28.519 }, 00:19:28.519 "uuid": "dda4bb84-2742-47a2-82b0-0ea0ff56d18c", 00:19:28.519 "zoned": false 00:19:28.519 } 00:19:28.519 ] 00:19:28.519 11:32:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.519 11:32:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@73 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:28.519 11:32:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.519 11:32:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:19:28.519 11:32:14 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.519 11:32:14 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@76 -- # rm -f /tmp/tmp.0OOqAlfk4k 00:19:28.519 11:32:14 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # trap - SIGINT SIGTERM EXIT 00:19:28.519 11:32:14 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@79 -- # nvmftestfini 00:19:28.519 11:32:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:28.519 11:32:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # sync 00:19:28.519 11:32:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:28.519 11:32:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set +e 00:19:28.519 11:32:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:28.519 11:32:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:28.519 rmmod nvme_tcp 00:19:28.519 rmmod nvme_fabrics 00:19:28.519 rmmod nvme_keyring 00:19:28.778 11:32:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:28.778 11:32:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@128 -- # set -e 00:19:28.778 11:32:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@129 -- # return 0 00:19:28.778 11:32:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@517 -- # '[' -n 86578 ']' 00:19:28.778 11:32:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@518 -- # killprocess 86578 00:19:28.778 11:32:14 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # '[' -z 86578 ']' 00:19:28.778 11:32:14 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@958 -- # kill -0 86578 00:19:28.778 11:32:14 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # uname 00:19:28.778 11:32:14 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:28.778 11:32:14 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86578 00:19:28.778 11:32:14 
nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:28.778 11:32:14 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:28.778 killing process with pid 86578 00:19:28.778 11:32:14 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86578' 00:19:28.778 11:32:14 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@973 -- # kill 86578 00:19:28.778 11:32:14 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@978 -- # wait 86578 00:19:28.778 11:32:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:28.778 11:32:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:28.778 11:32:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:28.778 11:32:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # iptr 00:19:28.778 11:32:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-save 00:19:28.778 11:32:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-restore 00:19:28.778 11:32:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:28.778 11:32:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:28.778 11:32:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:19:28.778 11:32:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:19:28.778 11:32:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:19:28.778 11:32:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:19:28.778 11:32:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:19:28.778 11:32:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:19:29.037 11:32:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:19:29.037 11:32:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:19:29.037 11:32:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:19:29.037 11:32:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:19:29.037 11:32:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:19:29.037 11:32:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:19:29.037 11:32:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:29.037 11:32:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:29.037 11:32:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@246 -- # remove_spdk_ns 00:19:29.037 11:32:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:29.037 11:32:14 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:29.037 11:32:14 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:19:29.037 11:32:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@300 -- # return 0 00:19:29.037 00:19:29.037 real 0m2.211s 00:19:29.037 user 0m1.647s 00:19:29.037 sys 0m0.690s 00:19:29.037 11:32:14 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:29.037 ************************************ 00:19:29.037 END TEST nvmf_async_init 00:19:29.037 ************************************ 00:19:29.037 11:32:14 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:19:29.037 11:32:14 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /home/vagrant/spdk_repo/spdk/test/nvmf/host/dma.sh --transport=tcp 00:19:29.037 11:32:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:29.037 11:32:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:29.037 11:32:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:19:29.037 ************************************ 00:19:29.037 START TEST dma 00:19:29.037 ************************************ 00:19:29.037 11:32:14 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/dma.sh --transport=tcp 00:19:29.297 * Looking for test storage... 00:19:29.297 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:19:29.297 11:32:14 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:29.297 11:32:14 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1693 -- # lcov --version 00:19:29.297 11:32:14 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:29.297 11:32:14 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:29.297 11:32:14 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:29.297 11:32:14 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:29.297 11:32:14 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:29.297 11:32:14 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # IFS=.-: 00:19:29.297 11:32:14 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # read -ra ver1 00:19:29.297 11:32:14 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # IFS=.-: 00:19:29.297 11:32:14 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # read -ra ver2 00:19:29.297 11:32:14 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@338 -- # local 'op=<' 00:19:29.297 11:32:14 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@340 -- # ver1_l=2 00:19:29.297 11:32:14 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@341 -- # ver2_l=1 00:19:29.297 11:32:14 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:29.297 11:32:14 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@344 -- # case "$op" in 00:19:29.297 11:32:14 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@345 -- # : 1 00:19:29.297 11:32:14 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:29.297 11:32:14 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:29.297 11:32:14 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # decimal 1 00:19:29.297 11:32:14 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=1 00:19:29.297 11:32:14 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:29.297 11:32:14 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 1 00:19:29.297 11:32:14 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # ver1[v]=1 00:19:29.297 11:32:14 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # decimal 2 00:19:29.297 11:32:14 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=2 00:19:29.297 11:32:14 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:29.297 11:32:14 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 2 00:19:29.297 11:32:14 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # ver2[v]=2 00:19:29.297 11:32:14 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:29.297 11:32:14 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:29.297 11:32:14 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # return 0 00:19:29.297 11:32:14 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:29.297 11:32:14 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:29.297 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:29.297 --rc genhtml_branch_coverage=1 00:19:29.297 --rc genhtml_function_coverage=1 00:19:29.297 --rc genhtml_legend=1 00:19:29.297 --rc geninfo_all_blocks=1 00:19:29.297 --rc geninfo_unexecuted_blocks=1 00:19:29.297 00:19:29.297 ' 00:19:29.297 11:32:14 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:29.297 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:29.297 --rc genhtml_branch_coverage=1 00:19:29.297 --rc genhtml_function_coverage=1 00:19:29.297 --rc genhtml_legend=1 00:19:29.297 --rc geninfo_all_blocks=1 00:19:29.297 --rc geninfo_unexecuted_blocks=1 00:19:29.297 00:19:29.297 ' 00:19:29.297 11:32:14 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:29.297 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:29.297 --rc genhtml_branch_coverage=1 00:19:29.297 --rc genhtml_function_coverage=1 00:19:29.297 --rc genhtml_legend=1 00:19:29.297 --rc geninfo_all_blocks=1 00:19:29.297 --rc geninfo_unexecuted_blocks=1 00:19:29.297 00:19:29.297 ' 00:19:29.297 11:32:14 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:29.297 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:29.297 --rc genhtml_branch_coverage=1 00:19:29.297 --rc genhtml_function_coverage=1 00:19:29.297 --rc genhtml_legend=1 00:19:29.297 --rc geninfo_all_blocks=1 00:19:29.297 --rc geninfo_unexecuted_blocks=1 00:19:29.297 00:19:29.297 ' 00:19:29.297 11:32:14 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:29.297 11:32:14 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:19:29.297 11:32:14 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:29.297 11:32:14 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:29.297 11:32:14 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:29.297 11:32:14 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:29.297 11:32:14 
nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:29.297 11:32:14 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:29.297 11:32:14 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:29.297 11:32:14 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:29.297 11:32:14 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:29.297 11:32:14 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:29.297 11:32:14 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a 00:19:29.297 11:32:14 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=806d442c-08ba-4719-b76d-33169283459a 00:19:29.297 11:32:14 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:29.297 11:32:14 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:29.297 11:32:14 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:29.297 11:32:14 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:29.297 11:32:14 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:29.297 11:32:14 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@15 -- # shopt -s extglob 00:19:29.297 11:32:14 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:29.298 11:32:14 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:29.298 11:32:14 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:29.298 11:32:14 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:29.298 11:32:14 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:29.298 11:32:14 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:29.298 11:32:14 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:19:29.298 11:32:14 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:29.298 11:32:14 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # : 0 00:19:29.298 11:32:14 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:29.298 11:32:14 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:29.298 11:32:14 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:29.298 11:32:14 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:29.298 11:32:14 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:29.298 11:32:14 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:29.298 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:29.298 11:32:14 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:29.298 11:32:14 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:29.298 11:32:14 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:29.298 11:32:14 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:19:29.298 11:32:14 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:19:29.298 00:19:29.298 real 0m0.211s 00:19:29.298 user 0m0.132s 00:19:29.298 sys 0m0.091s 00:19:29.298 11:32:14 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:29.298 11:32:14 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:19:29.298 ************************************ 00:19:29.298 END TEST dma 00:19:29.298 ************************************ 00:19:29.298 11:32:14 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:19:29.298 11:32:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:29.298 11:32:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:29.298 11:32:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:19:29.298 ************************************ 00:19:29.298 START TEST nvmf_identify 00:19:29.298 ************************************ 00:19:29.298 11:32:14 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:19:29.558 * Looking for test storage... 00:19:29.558 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:19:29.558 11:32:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:29.558 11:32:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # lcov --version 00:19:29.558 11:32:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:29.558 11:32:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:29.558 11:32:15 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:29.558 11:32:15 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:29.558 11:32:15 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:29.558 11:32:15 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:19:29.558 11:32:15 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:19:29.558 11:32:15 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:19:29.558 11:32:15 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:19:29.558 11:32:15 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:19:29.558 11:32:15 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:19:29.558 11:32:15 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:19:29.558 11:32:15 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:29.558 11:32:15 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:19:29.558 11:32:15 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:19:29.558 11:32:15 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:29.558 11:32:15 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:29.558 11:32:15 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:19:29.558 11:32:15 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:19:29.558 11:32:15 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:29.558 11:32:15 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:19:29.558 11:32:15 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:19:29.558 11:32:15 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:19:29.558 11:32:15 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:19:29.558 11:32:15 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:29.558 11:32:15 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:19:29.558 11:32:15 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:19:29.558 11:32:15 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:29.558 11:32:15 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:29.558 11:32:15 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:19:29.558 11:32:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:29.558 11:32:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:29.558 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:29.558 --rc genhtml_branch_coverage=1 00:19:29.558 --rc genhtml_function_coverage=1 00:19:29.558 --rc genhtml_legend=1 00:19:29.558 --rc geninfo_all_blocks=1 00:19:29.558 --rc geninfo_unexecuted_blocks=1 00:19:29.558 00:19:29.558 ' 00:19:29.558 11:32:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:29.558 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:29.558 --rc genhtml_branch_coverage=1 00:19:29.558 --rc genhtml_function_coverage=1 00:19:29.558 --rc genhtml_legend=1 00:19:29.558 --rc geninfo_all_blocks=1 00:19:29.558 --rc geninfo_unexecuted_blocks=1 00:19:29.558 00:19:29.558 ' 00:19:29.558 11:32:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:29.558 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:29.558 --rc genhtml_branch_coverage=1 00:19:29.558 --rc genhtml_function_coverage=1 00:19:29.558 --rc genhtml_legend=1 00:19:29.558 --rc geninfo_all_blocks=1 00:19:29.558 --rc geninfo_unexecuted_blocks=1 00:19:29.558 00:19:29.558 ' 00:19:29.558 11:32:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:29.558 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:29.558 --rc genhtml_branch_coverage=1 00:19:29.558 --rc genhtml_function_coverage=1 00:19:29.558 --rc genhtml_legend=1 00:19:29.558 --rc geninfo_all_blocks=1 00:19:29.558 --rc geninfo_unexecuted_blocks=1 00:19:29.558 00:19:29.558 ' 00:19:29.558 11:32:15 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:29.558 11:32:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:19:29.558 11:32:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:29.558 11:32:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # 
NVMF_PORT=4420 00:19:29.558 11:32:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:29.558 11:32:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:29.558 11:32:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:29.558 11:32:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:29.558 11:32:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:29.558 11:32:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:29.558 11:32:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:29.558 11:32:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:29.558 11:32:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a 00:19:29.558 11:32:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=806d442c-08ba-4719-b76d-33169283459a 00:19:29.558 11:32:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:29.558 11:32:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:29.558 11:32:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:29.558 11:32:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:29.558 11:32:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:29.558 11:32:15 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:19:29.558 11:32:15 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:29.558 11:32:15 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:29.558 11:32:15 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:29.558 11:32:15 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:29.558 11:32:15 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:29.558 
11:32:15 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:29.558 11:32:15 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:19:29.558 11:32:15 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:29.558 11:32:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:19:29.558 11:32:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:29.558 11:32:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:29.558 11:32:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:29.558 11:32:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:29.558 11:32:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:29.558 11:32:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:29.558 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:29.558 11:32:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:29.558 11:32:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:29.559 11:32:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:29.559 11:32:15 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:29.559 11:32:15 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:29.559 11:32:15 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:19:29.559 11:32:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:29.559 11:32:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:29.559 11:32:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:29.559 11:32:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:29.559 11:32:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:29.559 11:32:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:29.559 11:32:15 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:29.559 11:32:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:29.559 11:32:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:19:29.559 11:32:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:19:29.559 11:32:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:19:29.559 11:32:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:19:29.559 11:32:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:19:29.559 11:32:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@460 -- # nvmf_veth_init 00:19:29.559 11:32:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:29.559 11:32:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:19:29.559 11:32:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:19:29.559 11:32:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:19:29.559 11:32:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:29.559 11:32:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:19:29.559 11:32:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:29.559 11:32:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:19:29.559 11:32:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:29.559 11:32:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:19:29.559 11:32:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:29.559 11:32:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:29.559 11:32:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:29.559 11:32:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:29.559 11:32:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:29.559 11:32:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:29.559 11:32:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:19:29.559 Cannot find device "nvmf_init_br" 00:19:29.559 11:32:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@162 -- # true 00:19:29.559 11:32:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:19:29.559 Cannot find device "nvmf_init_br2" 00:19:29.559 11:32:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@163 -- # true 00:19:29.559 11:32:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:19:29.559 Cannot find device "nvmf_tgt_br" 00:19:29.559 11:32:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@164 -- # true 00:19:29.559 11:32:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 
00:19:29.559 Cannot find device "nvmf_tgt_br2" 00:19:29.559 11:32:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@165 -- # true 00:19:29.559 11:32:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:19:29.559 Cannot find device "nvmf_init_br" 00:19:29.559 11:32:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@166 -- # true 00:19:29.559 11:32:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:19:29.559 Cannot find device "nvmf_init_br2" 00:19:29.559 11:32:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@167 -- # true 00:19:29.559 11:32:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:19:29.818 Cannot find device "nvmf_tgt_br" 00:19:29.818 11:32:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@168 -- # true 00:19:29.818 11:32:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:19:29.818 Cannot find device "nvmf_tgt_br2" 00:19:29.818 11:32:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@169 -- # true 00:19:29.818 11:32:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:19:29.818 Cannot find device "nvmf_br" 00:19:29.818 11:32:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@170 -- # true 00:19:29.818 11:32:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:19:29.818 Cannot find device "nvmf_init_if" 00:19:29.818 11:32:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@171 -- # true 00:19:29.818 11:32:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:19:29.818 Cannot find device "nvmf_init_if2" 00:19:29.818 11:32:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@172 -- # true 00:19:29.818 11:32:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:29.818 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:29.818 11:32:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@173 -- # true 00:19:29.818 11:32:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:29.818 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:29.818 11:32:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@174 -- # true 00:19:29.818 11:32:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:19:29.818 11:32:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:29.818 11:32:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:19:29.818 11:32:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:29.818 11:32:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:29.818 11:32:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:29.818 11:32:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:29.818 11:32:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:29.818 
11:32:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:19:29.818 11:32:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:19:29.818 11:32:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:19:29.818 11:32:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:19:29.818 11:32:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:19:29.818 11:32:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:19:29.818 11:32:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:19:29.818 11:32:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:19:29.818 11:32:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:19:30.078 11:32:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:30.078 11:32:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:30.078 11:32:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:30.078 11:32:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:19:30.078 11:32:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:19:30.078 11:32:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:19:30.078 11:32:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:19:30.078 11:32:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:30.078 11:32:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:30.078 11:32:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:30.078 11:32:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:19:30.078 11:32:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:19:30.078 11:32:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:19:30.078 11:32:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:30.078 11:32:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:19:30.078 11:32:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:19:30.078 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:19:30.078 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.105 ms 00:19:30.078 00:19:30.078 --- 10.0.0.3 ping statistics --- 00:19:30.078 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:30.078 rtt min/avg/max/mdev = 0.105/0.105/0.105/0.000 ms 00:19:30.078 11:32:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:19:30.078 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:19:30.078 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.062 ms 00:19:30.078 00:19:30.078 --- 10.0.0.4 ping statistics --- 00:19:30.078 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:30.078 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:19:30.078 11:32:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:30.078 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:30.078 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:19:30.078 00:19:30.078 --- 10.0.0.1 ping statistics --- 00:19:30.078 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:30.078 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:19:30.078 11:32:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:19:30.078 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:30.078 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.056 ms 00:19:30.078 00:19:30.078 --- 10.0.0.2 ping statistics --- 00:19:30.078 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:30.078 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:19:30.078 11:32:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:30.078 11:32:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@461 -- # return 0 00:19:30.078 11:32:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:30.078 11:32:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:30.078 11:32:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:30.078 11:32:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:30.078 11:32:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:30.078 11:32:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:30.078 11:32:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:30.078 11:32:15 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:19:30.078 11:32:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:30.078 11:32:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:19:30.078 11:32:15 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=86896 00:19:30.078 11:32:15 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:19:30.078 11:32:15 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:30.078 11:32:15 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 86896 00:19:30.078 11:32:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # '[' -z 86896 ']' 00:19:30.078 
11:32:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:30.078 11:32:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:30.078 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:30.078 11:32:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:30.078 11:32:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:30.078 11:32:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:19:30.078 [2024-11-20 11:32:15.621770] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 00:19:30.078 [2024-11-20 11:32:15.621865] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:30.338 [2024-11-20 11:32:15.783410] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:30.338 [2024-11-20 11:32:15.824178] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:30.338 [2024-11-20 11:32:15.824247] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:30.338 [2024-11-20 11:32:15.824261] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:30.338 [2024-11-20 11:32:15.824278] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:30.338 [2024-11-20 11:32:15.824286] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
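The nvmf_veth_init trace above builds the test network before the target comes up: a nvmf_tgt_ns_spdk namespace, veth pairs whose target ends are moved into that namespace, a nvmf_br bridge joining the peer ends, and iptables rules admitting NVMe/TCP on port 4420, verified by the pings to 10.0.0.1 through 10.0.0.4. A minimal standalone sketch of the same topology, trimmed to one veth pair per side (the helper also adds a second pair for 10.0.0.2/10.0.0.4) and assuming root privileges:

#!/usr/bin/env bash
# Sketch of the veth/bridge topology set up by nvmf_veth_init in the trace above.
set -e
ip netns add nvmf_tgt_ns_spdk

# One veth pair per side; the *_if end carries the IP, the *_br end joins the bridge.
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

# Addresses as in the log: 10.0.0.1 = initiator, 10.0.0.3 = target.
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if

ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

# Bridge the peer ends and open TCP/4420 for NVMe-oF traffic.
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

ping -c 1 10.0.0.3   # same reachability check the trace performs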
00:19:30.338 [2024-11-20 11:32:15.825140] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:30.338 [2024-11-20 11:32:15.825238] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:30.338 [2024-11-20 11:32:15.825547] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:19:30.338 [2024-11-20 11:32:15.825551] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:30.338 11:32:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:30.338 11:32:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@868 -- # return 0 00:19:30.338 11:32:15 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:30.338 11:32:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.338 11:32:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:19:30.597 [2024-11-20 11:32:15.925226] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:30.597 11:32:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.597 11:32:15 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:19:30.597 11:32:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:30.597 11:32:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:19:30.597 11:32:15 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:30.597 11:32:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.597 11:32:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:19:30.597 Malloc0 00:19:30.597 11:32:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.597 11:32:16 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:30.597 11:32:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.597 11:32:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:19:30.597 11:32:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.597 11:32:16 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:19:30.597 11:32:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.597 11:32:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:19:30.597 11:32:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.597 11:32:16 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:19:30.597 11:32:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.597 11:32:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:19:30.597 [2024-11-20 11:32:16.042098] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:19:30.597 11:32:16 nvmf_tcp.nvmf_host.nvmf_identify -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.597 11:32:16 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:19:30.597 11:32:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.597 11:32:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:19:30.597 11:32:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.597 11:32:16 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:19:30.597 11:32:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.597 11:32:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:19:30.597 [ 00:19:30.597 { 00:19:30.597 "allow_any_host": true, 00:19:30.597 "hosts": [], 00:19:30.597 "listen_addresses": [ 00:19:30.597 { 00:19:30.597 "adrfam": "IPv4", 00:19:30.597 "traddr": "10.0.0.3", 00:19:30.597 "trsvcid": "4420", 00:19:30.597 "trtype": "TCP" 00:19:30.597 } 00:19:30.597 ], 00:19:30.597 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:19:30.598 "subtype": "Discovery" 00:19:30.598 }, 00:19:30.598 { 00:19:30.598 "allow_any_host": true, 00:19:30.598 "hosts": [], 00:19:30.598 "listen_addresses": [ 00:19:30.598 { 00:19:30.598 "adrfam": "IPv4", 00:19:30.598 "traddr": "10.0.0.3", 00:19:30.598 "trsvcid": "4420", 00:19:30.598 "trtype": "TCP" 00:19:30.598 } 00:19:30.598 ], 00:19:30.598 "max_cntlid": 65519, 00:19:30.598 "max_namespaces": 32, 00:19:30.598 "min_cntlid": 1, 00:19:30.598 "model_number": "SPDK bdev Controller", 00:19:30.598 "namespaces": [ 00:19:30.598 { 00:19:30.598 "bdev_name": "Malloc0", 00:19:30.598 "eui64": "ABCDEF0123456789", 00:19:30.598 "name": "Malloc0", 00:19:30.598 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:19:30.598 "nsid": 1, 00:19:30.598 "uuid": "06c3efbb-fea1-4e70-a700-452ed13154a1" 00:19:30.598 } 00:19:30.598 ], 00:19:30.598 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:30.598 "serial_number": "SPDK00000000000001", 00:19:30.598 "subtype": "NVMe" 00:19:30.598 } 00:19:30.598 ] 00:19:30.598 11:32:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.598 11:32:16 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:19:30.598 [2024-11-20 11:32:16.092043] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 
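
Everything the test has configured so far is plain JSON-RPC against the target's /var/tmp/spdk.sock, followed by one run of spdk_nvme_identify against the discovery NQN, which is just starting up above. A sketch of the same sequence outside the test harness, assuming scripts/rpc.py from an SPDK checkout stands in for the rpc_cmd wrapper (method names and arguments are copied verbatim from the log lines above):

    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
        --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    ./scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420
    ./scripts/rpc.py nvmf_get_subsystems    # should return the two subsystems printed above

    # identify the discovery controller with all debug log flags enabled (-L all)
    ./build/bin/spdk_nvme_identify \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' \
        -L all
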
00:19:30.598 [2024-11-20 11:32:16.092090] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86937 ] 00:19:30.860 [2024-11-20 11:32:16.260185] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout) 00:19:30.860 [2024-11-20 11:32:16.260273] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:19:30.860 [2024-11-20 11:32:16.260282] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:19:30.860 [2024-11-20 11:32:16.260303] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:19:30.860 [2024-11-20 11:32:16.260315] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:19:30.861 [2024-11-20 11:32:16.260726] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout) 00:19:30.861 [2024-11-20 11:32:16.260812] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x6ecd90 0 00:19:30.861 [2024-11-20 11:32:16.265638] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:19:30.861 [2024-11-20 11:32:16.265666] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:19:30.861 [2024-11-20 11:32:16.265673] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:19:30.861 [2024-11-20 11:32:16.265677] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:19:30.861 [2024-11-20 11:32:16.265710] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:30.861 [2024-11-20 11:32:16.265717] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:30.861 [2024-11-20 11:32:16.265722] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x6ecd90) 00:19:30.861 [2024-11-20 11:32:16.265737] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:19:30.861 [2024-11-20 11:32:16.265772] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x72d600, cid 0, qid 0 00:19:30.861 [2024-11-20 11:32:16.273638] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:30.861 [2024-11-20 11:32:16.273662] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:30.861 [2024-11-20 11:32:16.273668] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:30.861 [2024-11-20 11:32:16.273674] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x72d600) on tqpair=0x6ecd90 00:19:30.861 [2024-11-20 11:32:16.273689] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:19:30.861 [2024-11-20 11:32:16.273698] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout) 00:19:30.861 [2024-11-20 11:32:16.273705] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout) 00:19:30.861 [2024-11-20 11:32:16.273722] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:30.861 [2024-11-20 11:32:16.273728] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 
00:19:30.861 [2024-11-20 11:32:16.273732] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x6ecd90) 00:19:30.861 [2024-11-20 11:32:16.273742] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.861 [2024-11-20 11:32:16.273774] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x72d600, cid 0, qid 0 00:19:30.861 [2024-11-20 11:32:16.273858] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:30.861 [2024-11-20 11:32:16.273866] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:30.861 [2024-11-20 11:32:16.273870] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:30.861 [2024-11-20 11:32:16.273874] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x72d600) on tqpair=0x6ecd90 00:19:30.861 [2024-11-20 11:32:16.273881] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout) 00:19:30.861 [2024-11-20 11:32:16.273889] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout) 00:19:30.861 [2024-11-20 11:32:16.273898] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:30.861 [2024-11-20 11:32:16.273903] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:30.861 [2024-11-20 11:32:16.273907] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x6ecd90) 00:19:30.861 [2024-11-20 11:32:16.273915] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.861 [2024-11-20 11:32:16.273936] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x72d600, cid 0, qid 0 00:19:30.861 [2024-11-20 11:32:16.273997] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:30.861 [2024-11-20 11:32:16.274005] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:30.861 [2024-11-20 11:32:16.274009] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:30.861 [2024-11-20 11:32:16.274013] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x72d600) on tqpair=0x6ecd90 00:19:30.861 [2024-11-20 11:32:16.274020] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout) 00:19:30.861 [2024-11-20 11:32:16.274031] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms) 00:19:30.861 [2024-11-20 11:32:16.274043] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:30.861 [2024-11-20 11:32:16.274049] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:30.861 [2024-11-20 11:32:16.274053] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x6ecd90) 00:19:30.861 [2024-11-20 11:32:16.274060] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.861 [2024-11-20 11:32:16.274082] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x72d600, cid 0, qid 0 00:19:30.861 [2024-11-20 11:32:16.274147] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:30.861 [2024-11-20 11:32:16.274155] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:30.861 [2024-11-20 11:32:16.274159] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:30.861 [2024-11-20 11:32:16.274163] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x72d600) on tqpair=0x6ecd90 00:19:30.861 [2024-11-20 11:32:16.274169] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:19:30.861 [2024-11-20 11:32:16.274180] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:30.861 [2024-11-20 11:32:16.274185] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:30.861 [2024-11-20 11:32:16.274189] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x6ecd90) 00:19:30.861 [2024-11-20 11:32:16.274197] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.861 [2024-11-20 11:32:16.274216] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x72d600, cid 0, qid 0 00:19:30.861 [2024-11-20 11:32:16.274274] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:30.861 [2024-11-20 11:32:16.274290] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:30.861 [2024-11-20 11:32:16.274295] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:30.861 [2024-11-20 11:32:16.274300] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x72d600) on tqpair=0x6ecd90 00:19:30.861 [2024-11-20 11:32:16.274307] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0 00:19:30.861 [2024-11-20 11:32:16.274316] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms) 00:19:30.861 [2024-11-20 11:32:16.274327] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:19:30.861 [2024-11-20 11:32:16.274440] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1 00:19:30.861 [2024-11-20 11:32:16.274454] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:19:30.861 [2024-11-20 11:32:16.274465] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:30.861 [2024-11-20 11:32:16.274469] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:30.861 [2024-11-20 11:32:16.274473] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x6ecd90) 00:19:30.861 [2024-11-20 11:32:16.274482] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.861 [2024-11-20 11:32:16.274505] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x72d600, cid 0, qid 0 00:19:30.861 [2024-11-20 11:32:16.274572] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:30.861 [2024-11-20 11:32:16.274585] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:30.861 [2024-11-20 11:32:16.274589] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 
00:19:30.861 [2024-11-20 11:32:16.274594] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x72d600) on tqpair=0x6ecd90 00:19:30.861 [2024-11-20 11:32:16.274600] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:19:30.861 [2024-11-20 11:32:16.274611] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:30.861 [2024-11-20 11:32:16.274629] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:30.861 [2024-11-20 11:32:16.274634] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x6ecd90) 00:19:30.861 [2024-11-20 11:32:16.274642] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.861 [2024-11-20 11:32:16.274664] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x72d600, cid 0, qid 0 00:19:30.861 [2024-11-20 11:32:16.274725] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:30.861 [2024-11-20 11:32:16.274732] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:30.861 [2024-11-20 11:32:16.274736] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:30.861 [2024-11-20 11:32:16.274741] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x72d600) on tqpair=0x6ecd90 00:19:30.861 [2024-11-20 11:32:16.274746] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:19:30.861 [2024-11-20 11:32:16.274752] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms) 00:19:30.862 [2024-11-20 11:32:16.274760] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout) 00:19:30.862 [2024-11-20 11:32:16.274777] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms) 00:19:30.862 [2024-11-20 11:32:16.274789] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:30.862 [2024-11-20 11:32:16.274794] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x6ecd90) 00:19:30.862 [2024-11-20 11:32:16.274802] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.862 [2024-11-20 11:32:16.274823] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x72d600, cid 0, qid 0 00:19:30.862 [2024-11-20 11:32:16.274913] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:19:30.862 [2024-11-20 11:32:16.274928] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:19:30.862 [2024-11-20 11:32:16.274933] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:19:30.862 [2024-11-20 11:32:16.274938] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x6ecd90): datao=0, datal=4096, cccid=0 00:19:30.862 [2024-11-20 11:32:16.274944] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x72d600) on tqpair(0x6ecd90): expected_datao=0, payload_size=4096 00:19:30.862 [2024-11-20 11:32:16.274949] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 
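
The debug lines above trace the standard NVMe-oF bring-up against the discovery controller: FABRIC CONNECT on the admin queue, property reads of VS and CAP, CC.EN written to 1, a wait for CSTS.RDY = 1, and then an Identify Controller command (cdw10:00000001, CNS 01h). A kernel-initiator run of the same handshake against the data subsystem would look roughly like the sketch below, assuming nvme-cli and the nvme-tcp module are available on a host that can reach 10.0.0.3 (the /dev/nvme0 name depends on enumeration and is an assumption):

    modprobe nvme-tcp
    # same CONNECT / enable / identify sequence, driven by the kernel host stack
    nvme connect -t tcp -a 10.0.0.3 -s 4420 -n nqn.2016-06.io.spdk:cnode1
    nvme id-ctrl /dev/nvme0        # assumes the new controller enumerated as nvme0
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1
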
00:19:30.862 [2024-11-20 11:32:16.274976] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:19:30.862 [2024-11-20 11:32:16.274985] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:19:30.862 [2024-11-20 11:32:16.274997] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:30.862 [2024-11-20 11:32:16.275004] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:30.862 [2024-11-20 11:32:16.275008] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:30.862 [2024-11-20 11:32:16.275012] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x72d600) on tqpair=0x6ecd90 00:19:30.862 [2024-11-20 11:32:16.275023] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295 00:19:30.862 [2024-11-20 11:32:16.275029] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072 00:19:30.862 [2024-11-20 11:32:16.275034] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001 00:19:30.862 [2024-11-20 11:32:16.275040] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16 00:19:30.862 [2024-11-20 11:32:16.275045] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] fuses compare and write: 1 00:19:30.862 [2024-11-20 11:32:16.275051] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms) 00:19:30.862 [2024-11-20 11:32:16.275067] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms) 00:19:30.862 [2024-11-20 11:32:16.275077] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:30.862 [2024-11-20 11:32:16.275081] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:30.862 [2024-11-20 11:32:16.275085] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x6ecd90) 00:19:30.862 [2024-11-20 11:32:16.275095] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:30.862 [2024-11-20 11:32:16.275121] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x72d600, cid 0, qid 0 00:19:30.862 [2024-11-20 11:32:16.275199] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:30.862 [2024-11-20 11:32:16.275212] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:30.862 [2024-11-20 11:32:16.275216] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:30.862 [2024-11-20 11:32:16.275221] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x72d600) on tqpair=0x6ecd90 00:19:30.862 [2024-11-20 11:32:16.275229] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:30.862 [2024-11-20 11:32:16.275234] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:30.862 [2024-11-20 11:32:16.275238] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x6ecd90) 00:19:30.862 [2024-11-20 11:32:16.275245] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:30.862 [2024-11-20 11:32:16.275253] 
nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:30.862 [2024-11-20 11:32:16.275257] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:30.862 [2024-11-20 11:32:16.275261] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x6ecd90) 00:19:30.862 [2024-11-20 11:32:16.275267] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:30.862 [2024-11-20 11:32:16.275274] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:30.862 [2024-11-20 11:32:16.275281] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:30.862 [2024-11-20 11:32:16.275288] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x6ecd90) 00:19:30.862 [2024-11-20 11:32:16.275298] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:30.862 [2024-11-20 11:32:16.275309] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:30.862 [2024-11-20 11:32:16.275317] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:30.862 [2024-11-20 11:32:16.275323] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x6ecd90) 00:19:30.862 [2024-11-20 11:32:16.275330] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:30.862 [2024-11-20 11:32:16.275337] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:19:30.862 [2024-11-20 11:32:16.275352] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:19:30.862 [2024-11-20 11:32:16.275360] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:30.862 [2024-11-20 11:32:16.275365] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x6ecd90) 00:19:30.862 [2024-11-20 11:32:16.275372] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.862 [2024-11-20 11:32:16.275397] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x72d600, cid 0, qid 0 00:19:30.862 [2024-11-20 11:32:16.275405] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x72d780, cid 1, qid 0 00:19:30.862 [2024-11-20 11:32:16.275411] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x72d900, cid 2, qid 0 00:19:30.862 [2024-11-20 11:32:16.275416] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x72da80, cid 3, qid 0 00:19:30.862 [2024-11-20 11:32:16.275421] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x72dc00, cid 4, qid 0 00:19:30.862 [2024-11-20 11:32:16.275520] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:30.862 [2024-11-20 11:32:16.275527] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:30.862 [2024-11-20 11:32:16.275531] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:30.862 [2024-11-20 11:32:16.275535] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x72dc00) on tqpair=0x6ecd90 00:19:30.862 [2024-11-20 11:32:16.275544] 
nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us 00:19:30.862 [2024-11-20 11:32:16.275553] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to ready (no timeout) 00:19:30.862 [2024-11-20 11:32:16.275572] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:30.862 [2024-11-20 11:32:16.275581] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x6ecd90) 00:19:30.862 [2024-11-20 11:32:16.275592] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.862 [2024-11-20 11:32:16.275629] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x72dc00, cid 4, qid 0 00:19:30.862 [2024-11-20 11:32:16.275703] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:19:30.862 [2024-11-20 11:32:16.275712] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:19:30.862 [2024-11-20 11:32:16.275716] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:19:30.862 [2024-11-20 11:32:16.275720] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x6ecd90): datao=0, datal=4096, cccid=4 00:19:30.862 [2024-11-20 11:32:16.275725] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x72dc00) on tqpair(0x6ecd90): expected_datao=0, payload_size=4096 00:19:30.862 [2024-11-20 11:32:16.275731] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:30.862 [2024-11-20 11:32:16.275738] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:19:30.862 [2024-11-20 11:32:16.275743] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:19:30.862 [2024-11-20 11:32:16.275752] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:30.862 [2024-11-20 11:32:16.275759] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:30.862 [2024-11-20 11:32:16.275763] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:30.862 [2024-11-20 11:32:16.275767] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x72dc00) on tqpair=0x6ecd90 00:19:30.862 [2024-11-20 11:32:16.275782] nvme_ctrlr.c:4202:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state 00:19:30.862 [2024-11-20 11:32:16.275831] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:30.862 [2024-11-20 11:32:16.275840] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x6ecd90) 00:19:30.862 [2024-11-20 11:32:16.275849] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.862 [2024-11-20 11:32:16.275859] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:30.862 [2024-11-20 11:32:16.275863] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:30.862 [2024-11-20 11:32:16.275867] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x6ecd90) 00:19:30.862 [2024-11-20 11:32:16.275874] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:19:30.863 [2024-11-20 11:32:16.275908] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp 
req 0x72dc00, cid 4, qid 0 00:19:30.863 [2024-11-20 11:32:16.275916] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x72dd80, cid 5, qid 0 00:19:30.863 [2024-11-20 11:32:16.276048] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:19:30.863 [2024-11-20 11:32:16.276061] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:19:30.863 [2024-11-20 11:32:16.276066] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:19:30.863 [2024-11-20 11:32:16.276070] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x6ecd90): datao=0, datal=1024, cccid=4 00:19:30.863 [2024-11-20 11:32:16.276076] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x72dc00) on tqpair(0x6ecd90): expected_datao=0, payload_size=1024 00:19:30.863 [2024-11-20 11:32:16.276081] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:30.863 [2024-11-20 11:32:16.276088] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:19:30.863 [2024-11-20 11:32:16.276093] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:19:30.863 [2024-11-20 11:32:16.276099] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:30.863 [2024-11-20 11:32:16.276105] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:30.863 [2024-11-20 11:32:16.276109] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:30.863 [2024-11-20 11:32:16.276114] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x72dd80) on tqpair=0x6ecd90 00:19:30.863 [2024-11-20 11:32:16.316751] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:30.863 [2024-11-20 11:32:16.316802] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:30.863 [2024-11-20 11:32:16.316808] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:30.863 [2024-11-20 11:32:16.316815] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x72dc00) on tqpair=0x6ecd90 00:19:30.863 [2024-11-20 11:32:16.316843] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:30.863 [2024-11-20 11:32:16.316849] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x6ecd90) 00:19:30.863 [2024-11-20 11:32:16.316865] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.863 [2024-11-20 11:32:16.316912] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x72dc00, cid 4, qid 0 00:19:30.863 [2024-11-20 11:32:16.317058] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:19:30.863 [2024-11-20 11:32:16.317072] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:19:30.863 [2024-11-20 11:32:16.317077] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:19:30.863 [2024-11-20 11:32:16.317081] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x6ecd90): datao=0, datal=3072, cccid=4 00:19:30.863 [2024-11-20 11:32:16.317087] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x72dc00) on tqpair(0x6ecd90): expected_datao=0, payload_size=3072 00:19:30.863 [2024-11-20 11:32:16.317092] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:30.863 [2024-11-20 11:32:16.317102] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:19:30.863 [2024-11-20 11:32:16.317107] 
nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:19:30.863 [2024-11-20 11:32:16.317117] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:30.863 [2024-11-20 11:32:16.317124] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:30.863 [2024-11-20 11:32:16.317128] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:30.863 [2024-11-20 11:32:16.317132] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x72dc00) on tqpair=0x6ecd90 00:19:30.863 [2024-11-20 11:32:16.317145] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:30.863 [2024-11-20 11:32:16.317150] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x6ecd90) 00:19:30.863 [2024-11-20 11:32:16.317158] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.863 [2024-11-20 11:32:16.317198] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x72dc00, cid 4, qid 0 00:19:30.863 [2024-11-20 11:32:16.317284] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:19:30.863 [2024-11-20 11:32:16.317302] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:19:30.863 [2024-11-20 11:32:16.317307] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:19:30.863 [2024-11-20 11:32:16.317311] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x6ecd90): datao=0, datal=8, cccid=4 00:19:30.863 [2024-11-20 11:32:16.317317] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x72dc00) on tqpair(0x6ecd90): expected_datao=0, payload_size=8 00:19:30.863 [2024-11-20 11:32:16.317321] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:30.863 [2024-11-20 11:32:16.317329] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:19:30.863 [2024-11-20 11:32:16.317333] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:19:30.863 [2024-11-20 11:32:16.361658] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:30.863 [2024-11-20 11:32:16.361699] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:30.863 [2024-11-20 11:32:16.361706] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:30.863 [2024-11-20 11:32:16.361712] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x72dc00) on tqpair=0x6ecd90 00:19:30.863 ===================================================== 00:19:30.863 NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2014-08.org.nvmexpress.discovery 00:19:30.863 ===================================================== 00:19:30.863 Controller Capabilities/Features 00:19:30.863 ================================ 00:19:30.863 Vendor ID: 0000 00:19:30.863 Subsystem Vendor ID: 0000 00:19:30.863 Serial Number: .................... 00:19:30.863 Model Number: ........................................ 
00:19:30.863 Firmware Version: 25.01 00:19:30.863 Recommended Arb Burst: 0 00:19:30.863 IEEE OUI Identifier: 00 00 00 00:19:30.863 Multi-path I/O 00:19:30.863 May have multiple subsystem ports: No 00:19:30.863 May have multiple controllers: No 00:19:30.863 Associated with SR-IOV VF: No 00:19:30.863 Max Data Transfer Size: 131072 00:19:30.863 Max Number of Namespaces: 0 00:19:30.863 Max Number of I/O Queues: 1024 00:19:30.863 NVMe Specification Version (VS): 1.3 00:19:30.863 NVMe Specification Version (Identify): 1.3 00:19:30.863 Maximum Queue Entries: 128 00:19:30.863 Contiguous Queues Required: Yes 00:19:30.863 Arbitration Mechanisms Supported 00:19:30.863 Weighted Round Robin: Not Supported 00:19:30.863 Vendor Specific: Not Supported 00:19:30.863 Reset Timeout: 15000 ms 00:19:30.863 Doorbell Stride: 4 bytes 00:19:30.863 NVM Subsystem Reset: Not Supported 00:19:30.863 Command Sets Supported 00:19:30.863 NVM Command Set: Supported 00:19:30.863 Boot Partition: Not Supported 00:19:30.863 Memory Page Size Minimum: 4096 bytes 00:19:30.863 Memory Page Size Maximum: 4096 bytes 00:19:30.863 Persistent Memory Region: Not Supported 00:19:30.863 Optional Asynchronous Events Supported 00:19:30.863 Namespace Attribute Notices: Not Supported 00:19:30.863 Firmware Activation Notices: Not Supported 00:19:30.863 ANA Change Notices: Not Supported 00:19:30.863 PLE Aggregate Log Change Notices: Not Supported 00:19:30.863 LBA Status Info Alert Notices: Not Supported 00:19:30.863 EGE Aggregate Log Change Notices: Not Supported 00:19:30.863 Normal NVM Subsystem Shutdown event: Not Supported 00:19:30.863 Zone Descriptor Change Notices: Not Supported 00:19:30.863 Discovery Log Change Notices: Supported 00:19:30.863 Controller Attributes 00:19:30.863 128-bit Host Identifier: Not Supported 00:19:30.863 Non-Operational Permissive Mode: Not Supported 00:19:30.863 NVM Sets: Not Supported 00:19:30.863 Read Recovery Levels: Not Supported 00:19:30.863 Endurance Groups: Not Supported 00:19:30.863 Predictable Latency Mode: Not Supported 00:19:30.863 Traffic Based Keep ALive: Not Supported 00:19:30.863 Namespace Granularity: Not Supported 00:19:30.863 SQ Associations: Not Supported 00:19:30.863 UUID List: Not Supported 00:19:30.864 Multi-Domain Subsystem: Not Supported 00:19:30.864 Fixed Capacity Management: Not Supported 00:19:30.864 Variable Capacity Management: Not Supported 00:19:30.864 Delete Endurance Group: Not Supported 00:19:30.864 Delete NVM Set: Not Supported 00:19:30.864 Extended LBA Formats Supported: Not Supported 00:19:30.864 Flexible Data Placement Supported: Not Supported 00:19:30.864 00:19:30.864 Controller Memory Buffer Support 00:19:30.864 ================================ 00:19:30.864 Supported: No 00:19:30.864 00:19:30.864 Persistent Memory Region Support 00:19:30.864 ================================ 00:19:30.864 Supported: No 00:19:30.864 00:19:30.864 Admin Command Set Attributes 00:19:30.864 ============================ 00:19:30.864 Security Send/Receive: Not Supported 00:19:30.864 Format NVM: Not Supported 00:19:30.864 Firmware Activate/Download: Not Supported 00:19:30.864 Namespace Management: Not Supported 00:19:30.864 Device Self-Test: Not Supported 00:19:30.864 Directives: Not Supported 00:19:30.864 NVMe-MI: Not Supported 00:19:30.864 Virtualization Management: Not Supported 00:19:30.864 Doorbell Buffer Config: Not Supported 00:19:30.864 Get LBA Status Capability: Not Supported 00:19:30.864 Command & Feature Lockdown Capability: Not Supported 00:19:30.864 Abort Command Limit: 1 00:19:30.864 Async 
Event Request Limit: 4 00:19:30.864 Number of Firmware Slots: N/A 00:19:30.864 Firmware Slot 1 Read-Only: N/A 00:19:30.864 Firmware Activation Without Reset: N/A 00:19:30.864 Multiple Update Detection Support: N/A 00:19:30.864 Firmware Update Granularity: No Information Provided 00:19:30.864 Per-Namespace SMART Log: No 00:19:30.864 Asymmetric Namespace Access Log Page: Not Supported 00:19:30.864 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:19:30.864 Command Effects Log Page: Not Supported 00:19:30.864 Get Log Page Extended Data: Supported 00:19:30.864 Telemetry Log Pages: Not Supported 00:19:30.864 Persistent Event Log Pages: Not Supported 00:19:30.864 Supported Log Pages Log Page: May Support 00:19:30.864 Commands Supported & Effects Log Page: Not Supported 00:19:30.864 Feature Identifiers & Effects Log Page:May Support 00:19:30.864 NVMe-MI Commands & Effects Log Page: May Support 00:19:30.864 Data Area 4 for Telemetry Log: Not Supported 00:19:30.864 Error Log Page Entries Supported: 128 00:19:30.864 Keep Alive: Not Supported 00:19:30.864 00:19:30.864 NVM Command Set Attributes 00:19:30.864 ========================== 00:19:30.864 Submission Queue Entry Size 00:19:30.864 Max: 1 00:19:30.864 Min: 1 00:19:30.864 Completion Queue Entry Size 00:19:30.864 Max: 1 00:19:30.864 Min: 1 00:19:30.864 Number of Namespaces: 0 00:19:30.864 Compare Command: Not Supported 00:19:30.864 Write Uncorrectable Command: Not Supported 00:19:30.864 Dataset Management Command: Not Supported 00:19:30.864 Write Zeroes Command: Not Supported 00:19:30.864 Set Features Save Field: Not Supported 00:19:30.864 Reservations: Not Supported 00:19:30.864 Timestamp: Not Supported 00:19:30.864 Copy: Not Supported 00:19:30.864 Volatile Write Cache: Not Present 00:19:30.864 Atomic Write Unit (Normal): 1 00:19:30.864 Atomic Write Unit (PFail): 1 00:19:30.864 Atomic Compare & Write Unit: 1 00:19:30.864 Fused Compare & Write: Supported 00:19:30.864 Scatter-Gather List 00:19:30.864 SGL Command Set: Supported 00:19:30.864 SGL Keyed: Supported 00:19:30.864 SGL Bit Bucket Descriptor: Not Supported 00:19:30.864 SGL Metadata Pointer: Not Supported 00:19:30.864 Oversized SGL: Not Supported 00:19:30.864 SGL Metadata Address: Not Supported 00:19:30.864 SGL Offset: Supported 00:19:30.864 Transport SGL Data Block: Not Supported 00:19:30.864 Replay Protected Memory Block: Not Supported 00:19:30.864 00:19:30.864 Firmware Slot Information 00:19:30.864 ========================= 00:19:30.864 Active slot: 0 00:19:30.864 00:19:30.864 00:19:30.864 Error Log 00:19:30.864 ========= 00:19:30.864 00:19:30.864 Active Namespaces 00:19:30.864 ================= 00:19:30.864 Discovery Log Page 00:19:30.864 ================== 00:19:30.864 Generation Counter: 2 00:19:30.864 Number of Records: 2 00:19:30.864 Record Format: 0 00:19:30.864 00:19:30.864 Discovery Log Entry 0 00:19:30.864 ---------------------- 00:19:30.864 Transport Type: 3 (TCP) 00:19:30.864 Address Family: 1 (IPv4) 00:19:30.864 Subsystem Type: 3 (Current Discovery Subsystem) 00:19:30.864 Entry Flags: 00:19:30.864 Duplicate Returned Information: 1 00:19:30.864 Explicit Persistent Connection Support for Discovery: 1 00:19:30.864 Transport Requirements: 00:19:30.864 Secure Channel: Not Required 00:19:30.864 Port ID: 0 (0x0000) 00:19:30.864 Controller ID: 65535 (0xffff) 00:19:30.864 Admin Max SQ Size: 128 00:19:30.864 Transport Service Identifier: 4420 00:19:30.864 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:19:30.864 Transport Address: 10.0.0.3 00:19:30.864 
Discovery Log Entry 1 00:19:30.864 ---------------------- 00:19:30.864 Transport Type: 3 (TCP) 00:19:30.864 Address Family: 1 (IPv4) 00:19:30.864 Subsystem Type: 2 (NVM Subsystem) 00:19:30.864 Entry Flags: 00:19:30.864 Duplicate Returned Information: 0 00:19:30.864 Explicit Persistent Connection Support for Discovery: 0 00:19:30.864 Transport Requirements: 00:19:30.864 Secure Channel: Not Required 00:19:30.864 Port ID: 0 (0x0000) 00:19:30.864 Controller ID: 65535 (0xffff) 00:19:30.864 Admin Max SQ Size: 128 00:19:30.864 Transport Service Identifier: 4420 00:19:30.864 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:19:30.864 Transport Address: 10.0.0.3 [2024-11-20 11:32:16.361869] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD 00:19:30.864 [2024-11-20 11:32:16.361889] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x72d600) on tqpair=0x6ecd90 00:19:30.864 [2024-11-20 11:32:16.361899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.864 [2024-11-20 11:32:16.361907] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x72d780) on tqpair=0x6ecd90 00:19:30.864 [2024-11-20 11:32:16.361912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.864 [2024-11-20 11:32:16.361917] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x72d900) on tqpair=0x6ecd90 00:19:30.864 [2024-11-20 11:32:16.361923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.864 [2024-11-20 11:32:16.361928] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x72da80) on tqpair=0x6ecd90 00:19:30.864 [2024-11-20 11:32:16.361934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.865 [2024-11-20 11:32:16.361949] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:30.865 [2024-11-20 11:32:16.361954] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:30.865 [2024-11-20 11:32:16.361958] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x6ecd90) 00:19:30.865 [2024-11-20 11:32:16.361971] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.865 [2024-11-20 11:32:16.362004] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x72da80, cid 3, qid 0 00:19:30.865 [2024-11-20 11:32:16.362096] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:30.865 [2024-11-20 11:32:16.362104] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:30.865 [2024-11-20 11:32:16.362108] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:30.865 [2024-11-20 11:32:16.362113] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x72da80) on tqpair=0x6ecd90 00:19:30.865 [2024-11-20 11:32:16.362122] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:30.865 [2024-11-20 11:32:16.362127] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:30.865 [2024-11-20 11:32:16.362131] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x6ecd90) 00:19:30.865 [2024-11-20 11:32:16.362139] 
nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.865 [2024-11-20 11:32:16.362165] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x72da80, cid 3, qid 0 00:19:30.865 [2024-11-20 11:32:16.362250] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:30.865 [2024-11-20 11:32:16.362258] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:30.865 [2024-11-20 11:32:16.362262] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:30.865 [2024-11-20 11:32:16.362266] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x72da80) on tqpair=0x6ecd90 00:19:30.865 [2024-11-20 11:32:16.362272] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us 00:19:30.865 [2024-11-20 11:32:16.362277] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms 00:19:30.865 [2024-11-20 11:32:16.362288] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:30.865 [2024-11-20 11:32:16.362294] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:30.865 [2024-11-20 11:32:16.362298] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x6ecd90) 00:19:30.865 [2024-11-20 11:32:16.362305] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.865 [2024-11-20 11:32:16.362325] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x72da80, cid 3, qid 0 00:19:30.865 [2024-11-20 11:32:16.362386] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:30.865 [2024-11-20 11:32:16.362395] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:30.865 [2024-11-20 11:32:16.362399] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:30.865 [2024-11-20 11:32:16.362403] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x72da80) on tqpair=0x6ecd90 00:19:30.865 [2024-11-20 11:32:16.362416] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:30.865 [2024-11-20 11:32:16.362421] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:30.865 [2024-11-20 11:32:16.362425] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x6ecd90) 00:19:30.865 [2024-11-20 11:32:16.362433] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.865 [2024-11-20 11:32:16.362454] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x72da80, cid 3, qid 0 00:19:30.865 [2024-11-20 11:32:16.362513] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:30.865 [2024-11-20 11:32:16.362520] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:30.865 [2024-11-20 11:32:16.362525] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:30.865 [2024-11-20 11:32:16.362529] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x72da80) on tqpair=0x6ecd90 00:19:30.865 [2024-11-20 11:32:16.362540] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:30.865 [2024-11-20 11:32:16.362546] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:30.865 [2024-11-20 11:32:16.362550] nvme_tcp.c: 
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x6ecd90) 00:19:30.865 [2024-11-20 11:32:16.362557] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.865 [2024-11-20 11:32:16.362575] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x72da80, cid 3, qid 0 00:19:30.865 [2024-11-20 11:32:16.362650] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:30.865 [2024-11-20 11:32:16.362659] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:30.865 [2024-11-20 11:32:16.362663] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:30.865 [2024-11-20 11:32:16.362667] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x72da80) on tqpair=0x6ecd90 00:19:30.865 [2024-11-20 11:32:16.362678] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:30.865 [2024-11-20 11:32:16.362684] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:30.865 [2024-11-20 11:32:16.362688] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x6ecd90) 00:19:30.865 [2024-11-20 11:32:16.362696] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.865 [2024-11-20 11:32:16.362722] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x72da80, cid 3, qid 0 00:19:30.865 [2024-11-20 11:32:16.362781] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:30.865 [2024-11-20 11:32:16.362788] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:30.865 [2024-11-20 11:32:16.362792] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:30.865 [2024-11-20 11:32:16.362796] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x72da80) on tqpair=0x6ecd90 00:19:30.865 [2024-11-20 11:32:16.362807] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:30.865 [2024-11-20 11:32:16.362813] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:30.865 [2024-11-20 11:32:16.362817] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x6ecd90) 00:19:30.865 [2024-11-20 11:32:16.362824] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.865 [2024-11-20 11:32:16.362842] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x72da80, cid 3, qid 0 00:19:30.865 [2024-11-20 11:32:16.362898] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:30.865 [2024-11-20 11:32:16.362907] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:30.865 [2024-11-20 11:32:16.362912] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:30.865 [2024-11-20 11:32:16.362916] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x72da80) on tqpair=0x6ecd90 00:19:30.865 [2024-11-20 11:32:16.362927] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:30.865 [2024-11-20 11:32:16.362932] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:30.865 [2024-11-20 11:32:16.362936] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x6ecd90) 00:19:30.865 [2024-11-20 11:32:16.362944] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.865 [2024-11-20 11:32:16.362962] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x72da80, cid 3, qid 0 00:19:30.865 [2024-11-20 11:32:16.363020] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:30.865 [2024-11-20 11:32:16.363027] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:30.865 [2024-11-20 11:32:16.363031] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:30.865 [2024-11-20 11:32:16.363035] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x72da80) on tqpair=0x6ecd90 00:19:30.865 [2024-11-20 11:32:16.363046] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:30.865 [2024-11-20 11:32:16.363051] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:30.865 [2024-11-20 11:32:16.363055] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x6ecd90) 00:19:30.865 [2024-11-20 11:32:16.363062] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.865 [2024-11-20 11:32:16.363080] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x72da80, cid 3, qid 0 00:19:30.865 [2024-11-20 11:32:16.363133] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:30.865 [2024-11-20 11:32:16.363155] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:30.865 [2024-11-20 11:32:16.363160] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:30.865 [2024-11-20 11:32:16.363164] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x72da80) on tqpair=0x6ecd90 00:19:30.865 [2024-11-20 11:32:16.363176] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:30.865 [2024-11-20 11:32:16.363181] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:30.865 [2024-11-20 11:32:16.363185] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x6ecd90) 00:19:30.865 [2024-11-20 11:32:16.363193] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.865 [2024-11-20 11:32:16.363213] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x72da80, cid 3, qid 0 00:19:30.865 [2024-11-20 11:32:16.363270] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:30.865 [2024-11-20 11:32:16.363277] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:30.865 [2024-11-20 11:32:16.363281] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:30.865 [2024-11-20 11:32:16.363285] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x72da80) on tqpair=0x6ecd90 00:19:30.866 [2024-11-20 11:32:16.363296] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:30.866 [2024-11-20 11:32:16.363301] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:30.866 [2024-11-20 11:32:16.363305] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x6ecd90) 00:19:30.866 [2024-11-20 11:32:16.363313] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.866 [2024-11-20 11:32:16.363331] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x72da80, cid 3, qid 0 00:19:30.866 [2024-11-20 11:32:16.363388] 
nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:30.866 [2024-11-20 11:32:16.363395] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:30.866 [2024-11-20 11:32:16.363399] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:30.866 [2024-11-20 11:32:16.363404] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x72da80) on tqpair=0x6ecd90 00:19:30.866 [2024-11-20 11:32:16.363414] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:30.866 [2024-11-20 11:32:16.363420] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:30.866 [2024-11-20 11:32:16.363424] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x6ecd90) 00:19:30.866 [2024-11-20 11:32:16.363431] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.866 [2024-11-20 11:32:16.363449] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x72da80, cid 3, qid 0 00:19:30.866 [2024-11-20 11:32:16.363504] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:30.866 [2024-11-20 11:32:16.363511] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:30.866 [2024-11-20 11:32:16.363515] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:30.866 [2024-11-20 11:32:16.363520] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x72da80) on tqpair=0x6ecd90 00:19:30.866 [2024-11-20 11:32:16.363530] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:30.866 [2024-11-20 11:32:16.363536] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:30.866 [2024-11-20 11:32:16.363540] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x6ecd90) 00:19:30.866 [2024-11-20 11:32:16.363547] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.866 [2024-11-20 11:32:16.363565] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x72da80, cid 3, qid 0 00:19:30.866 [2024-11-20 11:32:16.363631] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:30.866 [2024-11-20 11:32:16.363641] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:30.866 [2024-11-20 11:32:16.363645] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:30.866 [2024-11-20 11:32:16.363649] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x72da80) on tqpair=0x6ecd90 00:19:30.866 [2024-11-20 11:32:16.363661] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:30.866 [2024-11-20 11:32:16.363666] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:30.866 [2024-11-20 11:32:16.363670] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x6ecd90) 00:19:30.866 [2024-11-20 11:32:16.363678] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.866 [2024-11-20 11:32:16.363700] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x72da80, cid 3, qid 0 00:19:30.866 [2024-11-20 11:32:16.363759] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:30.866 [2024-11-20 11:32:16.363766] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:30.866 [2024-11-20 11:32:16.363770] 
nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:30.866 [2024-11-20 11:32:16.363774] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x72da80) on tqpair=0x6ecd90 00:19:30.866 [2024-11-20 11:32:16.363785] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:30.866 [2024-11-20 11:32:16.363790] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:30.866 [2024-11-20 11:32:16.363794] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x6ecd90) 00:19:30.866 [2024-11-20 11:32:16.363802] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.866 [2024-11-20 11:32:16.363820] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x72da80, cid 3, qid 0 00:19:30.866 [2024-11-20 11:32:16.363873] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:30.866 [2024-11-20 11:32:16.363880] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:30.866 [2024-11-20 11:32:16.363884] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:30.866 [2024-11-20 11:32:16.363888] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x72da80) on tqpair=0x6ecd90 00:19:30.866 [2024-11-20 11:32:16.363899] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:30.866 [2024-11-20 11:32:16.363904] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:30.866 [2024-11-20 11:32:16.363908] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x6ecd90) 00:19:30.866 [2024-11-20 11:32:16.363916] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.866 [2024-11-20 11:32:16.363933] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x72da80, cid 3, qid 0 00:19:30.866 [2024-11-20 11:32:16.363987] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:30.866 [2024-11-20 11:32:16.363995] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:30.866 [2024-11-20 11:32:16.363999] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:30.866 [2024-11-20 11:32:16.364003] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x72da80) on tqpair=0x6ecd90 00:19:30.866 [2024-11-20 11:32:16.364015] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:30.866 [2024-11-20 11:32:16.364020] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:30.866 [2024-11-20 11:32:16.364024] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x6ecd90) 00:19:30.866 [2024-11-20 11:32:16.364031] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.866 [2024-11-20 11:32:16.364050] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x72da80, cid 3, qid 0 00:19:30.866 [2024-11-20 11:32:16.364104] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:30.866 [2024-11-20 11:32:16.364112] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:30.866 [2024-11-20 11:32:16.364116] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:30.866 [2024-11-20 11:32:16.364120] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x72da80) on tqpair=0x6ecd90 00:19:30.866 
[2024-11-20 11:32:16.364131] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:30.866 [2024-11-20 11:32:16.364136] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:30.866 [2024-11-20 11:32:16.364140] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x6ecd90) 00:19:30.867 [2024-11-20 11:32:16.364148] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.867 [2024-11-20 11:32:16.364165] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x72da80, cid 3, qid 0 00:19:30.867 [2024-11-20 11:32:16.364223] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:30.867 [2024-11-20 11:32:16.364230] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:30.867 [2024-11-20 11:32:16.364234] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:30.867 [2024-11-20 11:32:16.364238] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x72da80) on tqpair=0x6ecd90 00:19:30.867 [2024-11-20 11:32:16.364249] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:30.867 [2024-11-20 11:32:16.364254] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:30.867 [2024-11-20 11:32:16.364258] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x6ecd90) 00:19:30.867 [2024-11-20 11:32:16.364266] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.867 [2024-11-20 11:32:16.364283] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x72da80, cid 3, qid 0 00:19:30.867 [2024-11-20 11:32:16.364338] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:30.867 [2024-11-20 11:32:16.364346] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:30.867 [2024-11-20 11:32:16.364350] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:30.867 [2024-11-20 11:32:16.364354] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x72da80) on tqpair=0x6ecd90 00:19:30.867 [2024-11-20 11:32:16.364365] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:30.867 [2024-11-20 11:32:16.364371] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:30.867 [2024-11-20 11:32:16.364375] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x6ecd90) 00:19:30.867 [2024-11-20 11:32:16.364382] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.867 [2024-11-20 11:32:16.364400] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x72da80, cid 3, qid 0 00:19:30.867 [2024-11-20 11:32:16.364458] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:30.867 [2024-11-20 11:32:16.364465] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:30.867 [2024-11-20 11:32:16.364469] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:30.867 [2024-11-20 11:32:16.364473] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x72da80) on tqpair=0x6ecd90 00:19:30.867 [2024-11-20 11:32:16.364484] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:30.867 [2024-11-20 11:32:16.364489] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:30.867 [2024-11-20 
11:32:16.364493] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x6ecd90) 00:19:30.867 [2024-11-20 11:32:16.364501] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.867 [2024-11-20 11:32:16.364518] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x72da80, cid 3, qid 0 00:19:30.867 [2024-11-20 11:32:16.364575] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:30.867 [2024-11-20 11:32:16.364582] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:30.867 [2024-11-20 11:32:16.364586] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:30.867 [2024-11-20 11:32:16.364590] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x72da80) on tqpair=0x6ecd90 00:19:30.867 [2024-11-20 11:32:16.364601] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:30.867 [2024-11-20 11:32:16.364606] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:30.867 [2024-11-20 11:32:16.364610] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x6ecd90) 00:19:30.867 [2024-11-20 11:32:16.364631] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.867 [2024-11-20 11:32:16.364653] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x72da80, cid 3, qid 0 00:19:30.867 [2024-11-20 11:32:16.364711] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:30.867 [2024-11-20 11:32:16.364719] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:30.867 [2024-11-20 11:32:16.364723] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:30.867 [2024-11-20 11:32:16.364727] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x72da80) on tqpair=0x6ecd90 00:19:30.867 [2024-11-20 11:32:16.364738] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:30.867 [2024-11-20 11:32:16.364743] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:30.867 [2024-11-20 11:32:16.364747] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x6ecd90) 00:19:30.867 [2024-11-20 11:32:16.364756] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.867 [2024-11-20 11:32:16.364783] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x72da80, cid 3, qid 0 00:19:30.867 [2024-11-20 11:32:16.364840] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:30.867 [2024-11-20 11:32:16.364847] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:30.867 [2024-11-20 11:32:16.364851] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:30.867 [2024-11-20 11:32:16.364855] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x72da80) on tqpair=0x6ecd90 00:19:30.867 [2024-11-20 11:32:16.364867] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:30.867 [2024-11-20 11:32:16.364872] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:30.867 [2024-11-20 11:32:16.364876] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x6ecd90) 00:19:30.867 [2024-11-20 11:32:16.364883] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.867 [2024-11-20 11:32:16.364902] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x72da80, cid 3, qid 0 00:19:30.867 [2024-11-20 11:32:16.364955] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:30.867 [2024-11-20 11:32:16.364962] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:30.867 [2024-11-20 11:32:16.364966] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:30.867 [2024-11-20 11:32:16.364970] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x72da80) on tqpair=0x6ecd90 00:19:30.867 [2024-11-20 11:32:16.364981] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:30.867 [2024-11-20 11:32:16.364986] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:30.867 [2024-11-20 11:32:16.364990] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x6ecd90) 00:19:30.867 [2024-11-20 11:32:16.364997] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.867 [2024-11-20 11:32:16.365015] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x72da80, cid 3, qid 0 00:19:30.867 [2024-11-20 11:32:16.365077] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:30.867 [2024-11-20 11:32:16.365084] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:30.867 [2024-11-20 11:32:16.365088] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:30.867 [2024-11-20 11:32:16.365092] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x72da80) on tqpair=0x6ecd90 00:19:30.867 [2024-11-20 11:32:16.365103] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:30.867 [2024-11-20 11:32:16.365108] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:30.867 [2024-11-20 11:32:16.365112] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x6ecd90) 00:19:30.867 [2024-11-20 11:32:16.365120] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.867 [2024-11-20 11:32:16.365137] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x72da80, cid 3, qid 0 00:19:30.867 [2024-11-20 11:32:16.365192] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:30.867 [2024-11-20 11:32:16.365199] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:30.867 [2024-11-20 11:32:16.365203] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:30.867 [2024-11-20 11:32:16.365207] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x72da80) on tqpair=0x6ecd90 00:19:30.867 [2024-11-20 11:32:16.365218] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:30.867 [2024-11-20 11:32:16.365224] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:30.867 [2024-11-20 11:32:16.365228] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x6ecd90) 00:19:30.867 [2024-11-20 11:32:16.365235] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.867 [2024-11-20 11:32:16.365253] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x72da80, cid 3, qid 0 00:19:30.867 [2024-11-20 
11:32:16.365309] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:30.867 [2024-11-20 11:32:16.365316] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:30.867 [2024-11-20 11:32:16.365320] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:30.867 [2024-11-20 11:32:16.365324] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x72da80) on tqpair=0x6ecd90 00:19:30.867 [2024-11-20 11:32:16.365335] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:30.867 [2024-11-20 11:32:16.365340] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:30.868 [2024-11-20 11:32:16.365344] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x6ecd90) 00:19:30.868 [2024-11-20 11:32:16.365352] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.868 [2024-11-20 11:32:16.365369] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x72da80, cid 3, qid 0 00:19:30.868 [2024-11-20 11:32:16.365428] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:30.868 [2024-11-20 11:32:16.365435] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:30.868 [2024-11-20 11:32:16.365439] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:30.868 [2024-11-20 11:32:16.365443] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x72da80) on tqpair=0x6ecd90 00:19:30.868 [2024-11-20 11:32:16.365454] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:30.868 [2024-11-20 11:32:16.365459] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:30.868 [2024-11-20 11:32:16.365463] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x6ecd90) 00:19:30.868 [2024-11-20 11:32:16.365471] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.868 [2024-11-20 11:32:16.365489] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x72da80, cid 3, qid 0 00:19:30.868 [2024-11-20 11:32:16.365557] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:30.868 [2024-11-20 11:32:16.365564] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:30.868 [2024-11-20 11:32:16.365568] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:30.868 [2024-11-20 11:32:16.365572] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x72da80) on tqpair=0x6ecd90 00:19:30.868 [2024-11-20 11:32:16.365601] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:30.868 [2024-11-20 11:32:16.365607] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:30.868 [2024-11-20 11:32:16.365611] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x6ecd90) 00:19:30.868 [2024-11-20 11:32:16.369647] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.868 [2024-11-20 11:32:16.369686] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x72da80, cid 3, qid 0 00:19:30.868 [2024-11-20 11:32:16.369772] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:30.868 [2024-11-20 11:32:16.369780] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:30.868 [2024-11-20 
11:32:16.369785] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:30.868 [2024-11-20 11:32:16.369789] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x72da80) on tqpair=0x6ecd90 00:19:30.868 [2024-11-20 11:32:16.369799] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 7 milliseconds 00:19:30.868 00:19:30.868 11:32:16 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:19:30.868 [2024-11-20 11:32:16.413831] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 00:19:30.868 [2024-11-20 11:32:16.413916] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86939 ] 00:19:31.129 [2024-11-20 11:32:16.583038] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout) 00:19:31.129 [2024-11-20 11:32:16.583113] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:19:31.129 [2024-11-20 11:32:16.583121] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:19:31.129 [2024-11-20 11:32:16.583139] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:19:31.129 [2024-11-20 11:32:16.583151] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:19:31.129 [2024-11-20 11:32:16.583484] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to wait for connect adminq (no timeout) 00:19:31.129 [2024-11-20 11:32:16.583557] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x142fd90 0 00:19:31.129 [2024-11-20 11:32:16.597643] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:19:31.129 [2024-11-20 11:32:16.597672] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:19:31.129 [2024-11-20 11:32:16.597679] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:19:31.129 [2024-11-20 11:32:16.597684] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:19:31.129 [2024-11-20 11:32:16.597730] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:31.129 [2024-11-20 11:32:16.597738] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:31.129 [2024-11-20 11:32:16.597742] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x142fd90) 00:19:31.129 [2024-11-20 11:32:16.597758] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:19:31.129 [2024-11-20 11:32:16.597794] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1470600, cid 0, qid 0 00:19:31.129 [2024-11-20 11:32:16.605644] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:31.129 [2024-11-20 11:32:16.605681] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:31.129 [2024-11-20 11:32:16.605688] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:31.129 [2024-11-20 11:32:16.605693] 
nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1470600) on tqpair=0x142fd90 00:19:31.129 [2024-11-20 11:32:16.605709] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:19:31.129 [2024-11-20 11:32:16.605719] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no timeout) 00:19:31.129 [2024-11-20 11:32:16.605727] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait for vs (no timeout) 00:19:31.129 [2024-11-20 11:32:16.605751] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:31.129 [2024-11-20 11:32:16.605758] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:31.129 [2024-11-20 11:32:16.605762] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x142fd90) 00:19:31.129 [2024-11-20 11:32:16.605773] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.129 [2024-11-20 11:32:16.605806] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1470600, cid 0, qid 0 00:19:31.129 [2024-11-20 11:32:16.606307] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:31.129 [2024-11-20 11:32:16.606325] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:31.129 [2024-11-20 11:32:16.606330] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:31.129 [2024-11-20 11:32:16.606335] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1470600) on tqpair=0x142fd90 00:19:31.129 [2024-11-20 11:32:16.606342] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout) 00:19:31.129 [2024-11-20 11:32:16.606352] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout) 00:19:31.129 [2024-11-20 11:32:16.606362] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:31.129 [2024-11-20 11:32:16.606367] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:31.129 [2024-11-20 11:32:16.606371] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x142fd90) 00:19:31.129 [2024-11-20 11:32:16.606380] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.129 [2024-11-20 11:32:16.606405] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1470600, cid 0, qid 0 00:19:31.129 [2024-11-20 11:32:16.606789] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:31.129 [2024-11-20 11:32:16.606807] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:31.129 [2024-11-20 11:32:16.606812] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:31.129 [2024-11-20 11:32:16.606817] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1470600) on tqpair=0x142fd90 00:19:31.129 [2024-11-20 11:32:16.606825] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en (no timeout) 00:19:31.129 [2024-11-20 11:32:16.606836] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc (timeout 15000 ms) 00:19:31.129 [2024-11-20 11:32:16.606845] nvme_tcp.c: 732:nvme_tcp_build_contig_request: 
*DEBUG*: enter 00:19:31.129 [2024-11-20 11:32:16.606850] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:31.129 [2024-11-20 11:32:16.606855] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x142fd90) 00:19:31.129 [2024-11-20 11:32:16.606863] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.129 [2024-11-20 11:32:16.606888] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1470600, cid 0, qid 0 00:19:31.129 [2024-11-20 11:32:16.607273] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:31.129 [2024-11-20 11:32:16.607291] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:31.129 [2024-11-20 11:32:16.607297] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:31.129 [2024-11-20 11:32:16.607301] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1470600) on tqpair=0x142fd90 00:19:31.129 [2024-11-20 11:32:16.607309] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:19:31.129 [2024-11-20 11:32:16.607322] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:31.129 [2024-11-20 11:32:16.607328] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:31.130 [2024-11-20 11:32:16.607332] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x142fd90) 00:19:31.130 [2024-11-20 11:32:16.607341] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.130 [2024-11-20 11:32:16.607365] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1470600, cid 0, qid 0 00:19:31.130 [2024-11-20 11:32:16.607756] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:31.130 [2024-11-20 11:32:16.607773] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:31.130 [2024-11-20 11:32:16.607778] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:31.130 [2024-11-20 11:32:16.607783] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1470600) on tqpair=0x142fd90 00:19:31.130 [2024-11-20 11:32:16.607789] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0 00:19:31.130 [2024-11-20 11:32:16.607795] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled (timeout 15000 ms) 00:19:31.130 [2024-11-20 11:32:16.607806] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:19:31.130 [2024-11-20 11:32:16.607920] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN = 1 00:19:31.130 [2024-11-20 11:32:16.607927] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:19:31.130 [2024-11-20 11:32:16.607939] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:31.130 [2024-11-20 11:32:16.607944] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:31.130 [2024-11-20 11:32:16.607948] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd 
cid=0 on tqpair(0x142fd90) 00:19:31.130 [2024-11-20 11:32:16.607957] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.130 [2024-11-20 11:32:16.607982] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1470600, cid 0, qid 0 00:19:31.130 [2024-11-20 11:32:16.608566] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:31.130 [2024-11-20 11:32:16.608582] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:31.130 [2024-11-20 11:32:16.608587] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:31.130 [2024-11-20 11:32:16.608592] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1470600) on tqpair=0x142fd90 00:19:31.130 [2024-11-20 11:32:16.608598] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:19:31.130 [2024-11-20 11:32:16.608611] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:31.130 [2024-11-20 11:32:16.608629] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:31.130 [2024-11-20 11:32:16.608634] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x142fd90) 00:19:31.130 [2024-11-20 11:32:16.608642] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.130 [2024-11-20 11:32:16.608665] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1470600, cid 0, qid 0 00:19:31.130 [2024-11-20 11:32:16.609036] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:31.130 [2024-11-20 11:32:16.609052] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:31.130 [2024-11-20 11:32:16.609061] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:31.130 [2024-11-20 11:32:16.609068] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1470600) on tqpair=0x142fd90 00:19:31.130 [2024-11-20 11:32:16.609077] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:19:31.130 [2024-11-20 11:32:16.609086] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms) 00:19:31.130 [2024-11-20 11:32:16.609100] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout) 00:19:31.130 [2024-11-20 11:32:16.609119] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms) 00:19:31.130 [2024-11-20 11:32:16.609132] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:31.130 [2024-11-20 11:32:16.609138] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x142fd90) 00:19:31.130 [2024-11-20 11:32:16.609147] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.130 [2024-11-20 11:32:16.609180] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1470600, cid 0, qid 0 00:19:31.130 [2024-11-20 11:32:16.609609] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:19:31.130 [2024-11-20 
11:32:16.613644] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:19:31.130 [2024-11-20 11:32:16.613655] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:19:31.130 [2024-11-20 11:32:16.613660] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x142fd90): datao=0, datal=4096, cccid=0 00:19:31.130 [2024-11-20 11:32:16.613665] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1470600) on tqpair(0x142fd90): expected_datao=0, payload_size=4096 00:19:31.130 [2024-11-20 11:32:16.613671] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:31.130 [2024-11-20 11:32:16.613681] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:19:31.130 [2024-11-20 11:32:16.613687] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:19:31.130 [2024-11-20 11:32:16.613698] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:31.130 [2024-11-20 11:32:16.613706] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:31.130 [2024-11-20 11:32:16.613710] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:31.130 [2024-11-20 11:32:16.613714] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1470600) on tqpair=0x142fd90 00:19:31.130 [2024-11-20 11:32:16.613726] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295 00:19:31.130 [2024-11-20 11:32:16.613732] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072 00:19:31.130 [2024-11-20 11:32:16.613738] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CNTLID 0x0001 00:19:31.130 [2024-11-20 11:32:16.613743] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16 00:19:31.130 [2024-11-20 11:32:16.613748] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1 00:19:31.130 [2024-11-20 11:32:16.613755] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to configure AER (timeout 30000 ms) 00:19:31.130 [2024-11-20 11:32:16.613773] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for configure aer (timeout 30000 ms) 00:19:31.130 [2024-11-20 11:32:16.613784] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:31.130 [2024-11-20 11:32:16.613789] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:31.130 [2024-11-20 11:32:16.613794] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x142fd90) 00:19:31.130 [2024-11-20 11:32:16.613803] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:31.130 [2024-11-20 11:32:16.613834] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1470600, cid 0, qid 0 00:19:31.130 [2024-11-20 11:32:16.614350] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:31.130 [2024-11-20 11:32:16.614367] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:31.130 [2024-11-20 11:32:16.614372] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:31.130 [2024-11-20 11:32:16.614377] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1470600) on tqpair=0x142fd90 
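The DEBUG trace above records the controller initialization that the spdk_nvme_identify run drives over the admin queue: connect the admin qpair, read VS and CAP, toggle CC.EN, wait for CSTS.RDY, then IDENTIFY and AER/keep-alive configuration. A minimal host-side sketch of the same sequence against this target, using SPDK's public NVMe API, might look as follows; the transport string is copied from the identify invocation earlier in the log, while the application name and the printed field are illustrative only and error handling is kept to a minimum.

    /* Sketch only, not part of the test log. Connects to the NVMe-oF TCP
     * target exercised above and reads the controller identify data. */
    #include "spdk/stdinc.h"
    #include "spdk/env.h"
    #include "spdk/nvme.h"

    int main(void)
    {
        struct spdk_env_opts env_opts;
        struct spdk_nvme_transport_id trid = {0};
        struct spdk_nvme_ctrlr_opts ctrlr_opts;
        struct spdk_nvme_ctrlr *ctrlr;
        const struct spdk_nvme_ctrlr_data *cdata;

        spdk_env_opts_init(&env_opts);
        env_opts.name = "identify_sketch";   /* illustrative name, not from the log */
        if (spdk_env_init(&env_opts) < 0) {
            return 1;
        }

        /* Same target string as the spdk_nvme_identify run in the log. */
        if (spdk_nvme_transport_id_parse(&trid,
                "trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 "
                "subnqn:nqn.2016-06.io.spdk:cnode1") != 0) {
            return 1;
        }

        spdk_nvme_ctrlr_get_default_ctrlr_opts(&ctrlr_opts, sizeof(ctrlr_opts));

        /* spdk_nvme_connect() performs the admin-queue connect and the
         * initialization steps traced above: read VS/CAP, set CC.EN, wait for
         * CSTS.RDY, IDENTIFY, configure AER, keep-alive, number of queues. */
        ctrlr = spdk_nvme_connect(&trid, &ctrlr_opts, sizeof(ctrlr_opts));
        if (ctrlr == NULL) {
            return 1;
        }

        cdata = spdk_nvme_ctrlr_get_data(ctrlr);
        printf("Serial Number: %.20s\n", cdata->sn);   /* e.g. SPDK00000000000001 */

        spdk_nvme_detach(ctrlr);
        return 0;
    }
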
00:19:31.130 [2024-11-20 11:32:16.614386] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:31.130 [2024-11-20 11:32:16.614392] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:31.130 [2024-11-20 11:32:16.614396] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x142fd90) 00:19:31.130 [2024-11-20 11:32:16.614404] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:31.130 [2024-11-20 11:32:16.614411] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:31.130 [2024-11-20 11:32:16.614415] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:31.130 [2024-11-20 11:32:16.614420] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x142fd90) 00:19:31.130 [2024-11-20 11:32:16.614426] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:31.130 [2024-11-20 11:32:16.614433] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:31.130 [2024-11-20 11:32:16.614438] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:31.130 [2024-11-20 11:32:16.614442] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x142fd90) 00:19:31.130 [2024-11-20 11:32:16.614448] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:31.130 [2024-11-20 11:32:16.614455] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:31.130 [2024-11-20 11:32:16.614459] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:31.130 [2024-11-20 11:32:16.614464] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x142fd90) 00:19:31.130 [2024-11-20 11:32:16.614470] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:31.130 [2024-11-20 11:32:16.614476] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:19:31.130 [2024-11-20 11:32:16.614492] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:19:31.130 [2024-11-20 11:32:16.614501] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:31.130 [2024-11-20 11:32:16.614506] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x142fd90) 00:19:31.130 [2024-11-20 11:32:16.614513] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.130 [2024-11-20 11:32:16.614539] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1470600, cid 0, qid 0 00:19:31.130 [2024-11-20 11:32:16.614547] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1470780, cid 1, qid 0 00:19:31.130 [2024-11-20 11:32:16.614552] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1470900, cid 2, qid 0 00:19:31.130 [2024-11-20 11:32:16.614558] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1470a80, cid 3, qid 0 00:19:31.130 [2024-11-20 11:32:16.614564] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 
0x1470c00, cid 4, qid 0 00:19:31.130 [2024-11-20 11:32:16.615144] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:31.130 [2024-11-20 11:32:16.615161] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:31.130 [2024-11-20 11:32:16.615166] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:31.130 [2024-11-20 11:32:16.615171] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1470c00) on tqpair=0x142fd90 00:19:31.130 [2024-11-20 11:32:16.615177] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us 00:19:31.131 [2024-11-20 11:32:16.615184] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller iocs specific (timeout 30000 ms) 00:19:31.131 [2024-11-20 11:32:16.615195] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set number of queues (timeout 30000 ms) 00:19:31.131 [2024-11-20 11:32:16.615207] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set number of queues (timeout 30000 ms) 00:19:31.131 [2024-11-20 11:32:16.615216] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:31.131 [2024-11-20 11:32:16.615221] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:31.131 [2024-11-20 11:32:16.615226] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x142fd90) 00:19:31.131 [2024-11-20 11:32:16.615234] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:31.131 [2024-11-20 11:32:16.615258] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1470c00, cid 4, qid 0 00:19:31.131 [2024-11-20 11:32:16.615649] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:31.131 [2024-11-20 11:32:16.615666] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:31.131 [2024-11-20 11:32:16.615671] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:31.131 [2024-11-20 11:32:16.615676] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1470c00) on tqpair=0x142fd90 00:19:31.131 [2024-11-20 11:32:16.615748] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 30000 ms) 00:19:31.131 [2024-11-20 11:32:16.615763] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify active ns (timeout 30000 ms) 00:19:31.131 [2024-11-20 11:32:16.615774] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:31.131 [2024-11-20 11:32:16.615779] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x142fd90) 00:19:31.131 [2024-11-20 11:32:16.615788] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.131 [2024-11-20 11:32:16.615812] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1470c00, cid 4, qid 0 00:19:31.131 [2024-11-20 11:32:16.616226] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:19:31.131 [2024-11-20 11:32:16.616243] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:19:31.131 [2024-11-20 
11:32:16.616248] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:19:31.131 [2024-11-20 11:32:16.616253] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x142fd90): datao=0, datal=4096, cccid=4 00:19:31.131 [2024-11-20 11:32:16.616258] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1470c00) on tqpair(0x142fd90): expected_datao=0, payload_size=4096 00:19:31.131 [2024-11-20 11:32:16.616264] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:31.131 [2024-11-20 11:32:16.616272] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:19:31.131 [2024-11-20 11:32:16.616277] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:19:31.131 [2024-11-20 11:32:16.616286] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:31.131 [2024-11-20 11:32:16.616293] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:31.131 [2024-11-20 11:32:16.616297] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:31.131 [2024-11-20 11:32:16.616302] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1470c00) on tqpair=0x142fd90 00:19:31.131 [2024-11-20 11:32:16.616319] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added 00:19:31.131 [2024-11-20 11:32:16.616335] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms) 00:19:31.131 [2024-11-20 11:32:16.616347] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms) 00:19:31.131 [2024-11-20 11:32:16.616357] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:31.131 [2024-11-20 11:32:16.616362] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x142fd90) 00:19:31.131 [2024-11-20 11:32:16.616370] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.131 [2024-11-20 11:32:16.616394] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1470c00, cid 4, qid 0 00:19:31.131 [2024-11-20 11:32:16.616828] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:19:31.131 [2024-11-20 11:32:16.616845] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:19:31.131 [2024-11-20 11:32:16.616850] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:19:31.131 [2024-11-20 11:32:16.616854] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x142fd90): datao=0, datal=4096, cccid=4 00:19:31.131 [2024-11-20 11:32:16.616860] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1470c00) on tqpair(0x142fd90): expected_datao=0, payload_size=4096 00:19:31.131 [2024-11-20 11:32:16.616865] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:31.131 [2024-11-20 11:32:16.616873] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:19:31.131 [2024-11-20 11:32:16.616878] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:19:31.131 [2024-11-20 11:32:16.616887] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:31.131 [2024-11-20 11:32:16.616894] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:31.131 [2024-11-20 11:32:16.616898] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: 
enter 00:19:31.131 [2024-11-20 11:32:16.616903] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1470c00) on tqpair=0x142fd90 00:19:31.131 [2024-11-20 11:32:16.616920] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:19:31.131 [2024-11-20 11:32:16.616933] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:19:31.131 [2024-11-20 11:32:16.616944] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:31.131 [2024-11-20 11:32:16.616949] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x142fd90) 00:19:31.131 [2024-11-20 11:32:16.616957] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.131 [2024-11-20 11:32:16.616982] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1470c00, cid 4, qid 0 00:19:31.131 [2024-11-20 11:32:16.617345] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:19:31.131 [2024-11-20 11:32:16.617364] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:19:31.131 [2024-11-20 11:32:16.617370] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:19:31.131 [2024-11-20 11:32:16.617374] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x142fd90): datao=0, datal=4096, cccid=4 00:19:31.131 [2024-11-20 11:32:16.617380] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1470c00) on tqpair(0x142fd90): expected_datao=0, payload_size=4096 00:19:31.131 [2024-11-20 11:32:16.617385] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:31.131 [2024-11-20 11:32:16.617393] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:19:31.131 [2024-11-20 11:32:16.617398] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:19:31.131 [2024-11-20 11:32:16.617407] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:31.131 [2024-11-20 11:32:16.617414] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:31.131 [2024-11-20 11:32:16.617418] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:31.131 [2024-11-20 11:32:16.617423] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1470c00) on tqpair=0x142fd90 00:19:31.131 [2024-11-20 11:32:16.617434] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns iocs specific (timeout 30000 ms) 00:19:31.131 [2024-11-20 11:32:16.617444] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms) 00:19:31.131 [2024-11-20 11:32:16.617456] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported features (timeout 30000 ms) 00:19:31.131 [2024-11-20 11:32:16.617464] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms) 00:19:31.131 [2024-11-20 11:32:16.617471] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set doorbell buffer config (timeout 30000 ms) 00:19:31.131 [2024-11-20 11:32:16.617477] 
nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host ID (timeout 30000 ms) 00:19:31.131 [2024-11-20 11:32:16.617483] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features - Host ID 00:19:31.131 [2024-11-20 11:32:16.617489] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms) 00:19:31.131 [2024-11-20 11:32:16.617495] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout) 00:19:31.131 [2024-11-20 11:32:16.617515] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:31.131 [2024-11-20 11:32:16.617521] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x142fd90) 00:19:31.131 [2024-11-20 11:32:16.617529] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.131 [2024-11-20 11:32:16.617538] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:31.131 [2024-11-20 11:32:16.617543] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:31.131 [2024-11-20 11:32:16.617547] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x142fd90) 00:19:31.131 [2024-11-20 11:32:16.617554] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:19:31.131 [2024-11-20 11:32:16.617597] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1470c00, cid 4, qid 0 00:19:31.131 [2024-11-20 11:32:16.617606] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1470d80, cid 5, qid 0 00:19:31.131 [2024-11-20 11:32:16.621643] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:31.131 [2024-11-20 11:32:16.621667] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:31.131 [2024-11-20 11:32:16.621673] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:31.131 [2024-11-20 11:32:16.621678] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1470c00) on tqpair=0x142fd90 00:19:31.131 [2024-11-20 11:32:16.621686] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:31.131 [2024-11-20 11:32:16.621693] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:31.131 [2024-11-20 11:32:16.621697] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:31.131 [2024-11-20 11:32:16.621702] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1470d80) on tqpair=0x142fd90 00:19:31.131 [2024-11-20 11:32:16.621715] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:31.131 [2024-11-20 11:32:16.621721] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x142fd90) 00:19:31.131 [2024-11-20 11:32:16.621730] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.131 [2024-11-20 11:32:16.621759] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1470d80, cid 5, qid 0 00:19:31.131 [2024-11-20 11:32:16.622158] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:31.132 [2024-11-20 11:32:16.622176] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: 
pdu type =5 00:19:31.132 [2024-11-20 11:32:16.622181] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:31.132 [2024-11-20 11:32:16.622186] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1470d80) on tqpair=0x142fd90 00:19:31.132 [2024-11-20 11:32:16.622198] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:31.132 [2024-11-20 11:32:16.622204] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x142fd90) 00:19:31.132 [2024-11-20 11:32:16.622212] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.132 [2024-11-20 11:32:16.622235] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1470d80, cid 5, qid 0 00:19:31.132 [2024-11-20 11:32:16.622666] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:31.132 [2024-11-20 11:32:16.622682] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:31.132 [2024-11-20 11:32:16.622687] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:31.132 [2024-11-20 11:32:16.622692] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1470d80) on tqpair=0x142fd90 00:19:31.132 [2024-11-20 11:32:16.622704] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:31.132 [2024-11-20 11:32:16.622709] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x142fd90) 00:19:31.132 [2024-11-20 11:32:16.622718] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.132 [2024-11-20 11:32:16.622739] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1470d80, cid 5, qid 0 00:19:31.132 [2024-11-20 11:32:16.623156] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:31.132 [2024-11-20 11:32:16.623171] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:31.132 [2024-11-20 11:32:16.623176] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:31.132 [2024-11-20 11:32:16.623181] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1470d80) on tqpair=0x142fd90 00:19:31.132 [2024-11-20 11:32:16.623205] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:31.132 [2024-11-20 11:32:16.623211] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x142fd90) 00:19:31.132 [2024-11-20 11:32:16.623220] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.132 [2024-11-20 11:32:16.623229] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:31.132 [2024-11-20 11:32:16.623234] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x142fd90) 00:19:31.132 [2024-11-20 11:32:16.623241] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.132 [2024-11-20 11:32:16.623249] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:31.132 [2024-11-20 11:32:16.623254] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x142fd90) 00:19:31.132 [2024-11-20 11:32:16.623261] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.132 [2024-11-20 11:32:16.623270] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:31.132 [2024-11-20 11:32:16.623275] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x142fd90) 00:19:31.132 [2024-11-20 11:32:16.623282] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.132 [2024-11-20 11:32:16.623306] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1470d80, cid 5, qid 0 00:19:31.132 [2024-11-20 11:32:16.623314] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1470c00, cid 4, qid 0 00:19:31.132 [2024-11-20 11:32:16.623319] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1470f00, cid 6, qid 0 00:19:31.132 [2024-11-20 11:32:16.623325] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1471080, cid 7, qid 0 00:19:31.132 [2024-11-20 11:32:16.623939] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:19:31.132 [2024-11-20 11:32:16.623955] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:19:31.132 [2024-11-20 11:32:16.623960] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:19:31.132 [2024-11-20 11:32:16.623965] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x142fd90): datao=0, datal=8192, cccid=5 00:19:31.132 [2024-11-20 11:32:16.623971] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1470d80) on tqpair(0x142fd90): expected_datao=0, payload_size=8192 00:19:31.132 [2024-11-20 11:32:16.623976] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:31.132 [2024-11-20 11:32:16.623995] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:19:31.132 [2024-11-20 11:32:16.624001] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:19:31.132 [2024-11-20 11:32:16.624008] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:19:31.132 [2024-11-20 11:32:16.624014] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:19:31.132 [2024-11-20 11:32:16.624018] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:19:31.132 [2024-11-20 11:32:16.624023] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x142fd90): datao=0, datal=512, cccid=4 00:19:31.132 [2024-11-20 11:32:16.624028] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1470c00) on tqpair(0x142fd90): expected_datao=0, payload_size=512 00:19:31.132 [2024-11-20 11:32:16.624033] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:31.132 [2024-11-20 11:32:16.624040] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:19:31.132 [2024-11-20 11:32:16.624044] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:19:31.132 [2024-11-20 11:32:16.624050] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:19:31.132 [2024-11-20 11:32:16.624057] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:19:31.132 [2024-11-20 11:32:16.624061] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:19:31.132 [2024-11-20 11:32:16.624065] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x142fd90): datao=0, datal=512, 
cccid=6 00:19:31.132 [2024-11-20 11:32:16.624070] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1470f00) on tqpair(0x142fd90): expected_datao=0, payload_size=512 00:19:31.132 [2024-11-20 11:32:16.624075] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:31.132 [2024-11-20 11:32:16.624082] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:19:31.132 [2024-11-20 11:32:16.624086] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:19:31.132 [2024-11-20 11:32:16.624092] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:19:31.132 [2024-11-20 11:32:16.624099] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:19:31.132 [2024-11-20 11:32:16.624103] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:19:31.132 [2024-11-20 11:32:16.624107] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x142fd90): datao=0, datal=4096, cccid=7 00:19:31.132 [2024-11-20 11:32:16.624112] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1471080) on tqpair(0x142fd90): expected_datao=0, payload_size=4096 00:19:31.132 [2024-11-20 11:32:16.624117] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:31.132 [2024-11-20 11:32:16.624125] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:19:31.132 [2024-11-20 11:32:16.624129] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:19:31.132 [2024-11-20 11:32:16.624138] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:31.132 [2024-11-20 11:32:16.624145] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:31.132 [2024-11-20 11:32:16.624152] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:31.132 [2024-11-20 11:32:16.624156] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1470d80) on tqpair=0x142fd90 00:19:31.132 [2024-11-20 11:32:16.624174] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:31.132 [2024-11-20 11:32:16.624181] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:31.132 [2024-11-20 11:32:16.624185] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:31.132 [2024-11-20 11:32:16.624190] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1470c00) on tqpair=0x142fd90 00:19:31.132 [2024-11-20 11:32:16.624203] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:31.132 [2024-11-20 11:32:16.624210] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:31.132 [2024-11-20 11:32:16.624214] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:31.132 [2024-11-20 11:32:16.624218] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1470f00) on tqpair=0x142fd90 00:19:31.132 [2024-11-20 11:32:16.624226] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:31.132 [2024-11-20 11:32:16.624233] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:31.132 [2024-11-20 11:32:16.624237] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:31.132 [2024-11-20 11:32:16.624241] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1471080) on tqpair=0x142fd90 00:19:31.132 ===================================================== 00:19:31.132 NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:19:31.132 ===================================================== 00:19:31.132 Controller 
Capabilities/Features 00:19:31.132 ================================ 00:19:31.132 Vendor ID: 8086 00:19:31.132 Subsystem Vendor ID: 8086 00:19:31.132 Serial Number: SPDK00000000000001 00:19:31.132 Model Number: SPDK bdev Controller 00:19:31.132 Firmware Version: 25.01 00:19:31.132 Recommended Arb Burst: 6 00:19:31.132 IEEE OUI Identifier: e4 d2 5c 00:19:31.132 Multi-path I/O 00:19:31.132 May have multiple subsystem ports: Yes 00:19:31.132 May have multiple controllers: Yes 00:19:31.132 Associated with SR-IOV VF: No 00:19:31.132 Max Data Transfer Size: 131072 00:19:31.132 Max Number of Namespaces: 32 00:19:31.132 Max Number of I/O Queues: 127 00:19:31.132 NVMe Specification Version (VS): 1.3 00:19:31.132 NVMe Specification Version (Identify): 1.3 00:19:31.132 Maximum Queue Entries: 128 00:19:31.132 Contiguous Queues Required: Yes 00:19:31.132 Arbitration Mechanisms Supported 00:19:31.132 Weighted Round Robin: Not Supported 00:19:31.132 Vendor Specific: Not Supported 00:19:31.132 Reset Timeout: 15000 ms 00:19:31.132 Doorbell Stride: 4 bytes 00:19:31.132 NVM Subsystem Reset: Not Supported 00:19:31.132 Command Sets Supported 00:19:31.132 NVM Command Set: Supported 00:19:31.132 Boot Partition: Not Supported 00:19:31.132 Memory Page Size Minimum: 4096 bytes 00:19:31.132 Memory Page Size Maximum: 4096 bytes 00:19:31.132 Persistent Memory Region: Not Supported 00:19:31.133 Optional Asynchronous Events Supported 00:19:31.133 Namespace Attribute Notices: Supported 00:19:31.133 Firmware Activation Notices: Not Supported 00:19:31.133 ANA Change Notices: Not Supported 00:19:31.133 PLE Aggregate Log Change Notices: Not Supported 00:19:31.133 LBA Status Info Alert Notices: Not Supported 00:19:31.133 EGE Aggregate Log Change Notices: Not Supported 00:19:31.133 Normal NVM Subsystem Shutdown event: Not Supported 00:19:31.133 Zone Descriptor Change Notices: Not Supported 00:19:31.133 Discovery Log Change Notices: Not Supported 00:19:31.133 Controller Attributes 00:19:31.133 128-bit Host Identifier: Supported 00:19:31.133 Non-Operational Permissive Mode: Not Supported 00:19:31.133 NVM Sets: Not Supported 00:19:31.133 Read Recovery Levels: Not Supported 00:19:31.133 Endurance Groups: Not Supported 00:19:31.133 Predictable Latency Mode: Not Supported 00:19:31.133 Traffic Based Keep ALive: Not Supported 00:19:31.133 Namespace Granularity: Not Supported 00:19:31.133 SQ Associations: Not Supported 00:19:31.133 UUID List: Not Supported 00:19:31.133 Multi-Domain Subsystem: Not Supported 00:19:31.133 Fixed Capacity Management: Not Supported 00:19:31.133 Variable Capacity Management: Not Supported 00:19:31.133 Delete Endurance Group: Not Supported 00:19:31.133 Delete NVM Set: Not Supported 00:19:31.133 Extended LBA Formats Supported: Not Supported 00:19:31.133 Flexible Data Placement Supported: Not Supported 00:19:31.133 00:19:31.133 Controller Memory Buffer Support 00:19:31.133 ================================ 00:19:31.133 Supported: No 00:19:31.133 00:19:31.133 Persistent Memory Region Support 00:19:31.133 ================================ 00:19:31.133 Supported: No 00:19:31.133 00:19:31.133 Admin Command Set Attributes 00:19:31.133 ============================ 00:19:31.133 Security Send/Receive: Not Supported 00:19:31.133 Format NVM: Not Supported 00:19:31.133 Firmware Activate/Download: Not Supported 00:19:31.133 Namespace Management: Not Supported 00:19:31.133 Device Self-Test: Not Supported 00:19:31.133 Directives: Not Supported 00:19:31.133 NVMe-MI: Not Supported 00:19:31.133 Virtualization Management: Not 
Supported 00:19:31.133 Doorbell Buffer Config: Not Supported 00:19:31.133 Get LBA Status Capability: Not Supported 00:19:31.133 Command & Feature Lockdown Capability: Not Supported 00:19:31.133 Abort Command Limit: 4 00:19:31.133 Async Event Request Limit: 4 00:19:31.133 Number of Firmware Slots: N/A 00:19:31.133 Firmware Slot 1 Read-Only: N/A 00:19:31.133 Firmware Activation Without Reset: N/A 00:19:31.133 Multiple Update Detection Support: N/A 00:19:31.133 Firmware Update Granularity: No Information Provided 00:19:31.133 Per-Namespace SMART Log: No 00:19:31.133 Asymmetric Namespace Access Log Page: Not Supported 00:19:31.133 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:19:31.133 Command Effects Log Page: Supported 00:19:31.133 Get Log Page Extended Data: Supported 00:19:31.133 Telemetry Log Pages: Not Supported 00:19:31.133 Persistent Event Log Pages: Not Supported 00:19:31.133 Supported Log Pages Log Page: May Support 00:19:31.133 Commands Supported & Effects Log Page: Not Supported 00:19:31.133 Feature Identifiers & Effects Log Page:May Support 00:19:31.133 NVMe-MI Commands & Effects Log Page: May Support 00:19:31.133 Data Area 4 for Telemetry Log: Not Supported 00:19:31.133 Error Log Page Entries Supported: 128 00:19:31.133 Keep Alive: Supported 00:19:31.133 Keep Alive Granularity: 10000 ms 00:19:31.133 00:19:31.133 NVM Command Set Attributes 00:19:31.133 ========================== 00:19:31.133 Submission Queue Entry Size 00:19:31.133 Max: 64 00:19:31.133 Min: 64 00:19:31.133 Completion Queue Entry Size 00:19:31.133 Max: 16 00:19:31.133 Min: 16 00:19:31.133 Number of Namespaces: 32 00:19:31.133 Compare Command: Supported 00:19:31.133 Write Uncorrectable Command: Not Supported 00:19:31.133 Dataset Management Command: Supported 00:19:31.133 Write Zeroes Command: Supported 00:19:31.133 Set Features Save Field: Not Supported 00:19:31.133 Reservations: Supported 00:19:31.133 Timestamp: Not Supported 00:19:31.133 Copy: Supported 00:19:31.133 Volatile Write Cache: Present 00:19:31.133 Atomic Write Unit (Normal): 1 00:19:31.133 Atomic Write Unit (PFail): 1 00:19:31.133 Atomic Compare & Write Unit: 1 00:19:31.133 Fused Compare & Write: Supported 00:19:31.133 Scatter-Gather List 00:19:31.133 SGL Command Set: Supported 00:19:31.133 SGL Keyed: Supported 00:19:31.133 SGL Bit Bucket Descriptor: Not Supported 00:19:31.133 SGL Metadata Pointer: Not Supported 00:19:31.133 Oversized SGL: Not Supported 00:19:31.133 SGL Metadata Address: Not Supported 00:19:31.133 SGL Offset: Supported 00:19:31.133 Transport SGL Data Block: Not Supported 00:19:31.133 Replay Protected Memory Block: Not Supported 00:19:31.133 00:19:31.133 Firmware Slot Information 00:19:31.133 ========================= 00:19:31.133 Active slot: 1 00:19:31.133 Slot 1 Firmware Revision: 25.01 00:19:31.133 00:19:31.133 00:19:31.133 Commands Supported and Effects 00:19:31.133 ============================== 00:19:31.133 Admin Commands 00:19:31.133 -------------- 00:19:31.133 Get Log Page (02h): Supported 00:19:31.133 Identify (06h): Supported 00:19:31.133 Abort (08h): Supported 00:19:31.133 Set Features (09h): Supported 00:19:31.133 Get Features (0Ah): Supported 00:19:31.133 Asynchronous Event Request (0Ch): Supported 00:19:31.133 Keep Alive (18h): Supported 00:19:31.133 I/O Commands 00:19:31.133 ------------ 00:19:31.133 Flush (00h): Supported LBA-Change 00:19:31.133 Write (01h): Supported LBA-Change 00:19:31.133 Read (02h): Supported 00:19:31.133 Compare (05h): Supported 00:19:31.133 Write Zeroes (08h): Supported LBA-Change 00:19:31.133 
Dataset Management (09h): Supported LBA-Change 00:19:31.133 Copy (19h): Supported LBA-Change 00:19:31.133 00:19:31.133 Error Log 00:19:31.133 ========= 00:19:31.133 00:19:31.133 Arbitration 00:19:31.133 =========== 00:19:31.133 Arbitration Burst: 1 00:19:31.133 00:19:31.133 Power Management 00:19:31.133 ================ 00:19:31.133 Number of Power States: 1 00:19:31.133 Current Power State: Power State #0 00:19:31.133 Power State #0: 00:19:31.133 Max Power: 0.00 W 00:19:31.133 Non-Operational State: Operational 00:19:31.133 Entry Latency: Not Reported 00:19:31.133 Exit Latency: Not Reported 00:19:31.133 Relative Read Throughput: 0 00:19:31.133 Relative Read Latency: 0 00:19:31.133 Relative Write Throughput: 0 00:19:31.133 Relative Write Latency: 0 00:19:31.133 Idle Power: Not Reported 00:19:31.133 Active Power: Not Reported 00:19:31.133 Non-Operational Permissive Mode: Not Supported 00:19:31.133 00:19:31.133 Health Information 00:19:31.133 ================== 00:19:31.133 Critical Warnings: 00:19:31.133 Available Spare Space: OK 00:19:31.133 Temperature: OK 00:19:31.133 Device Reliability: OK 00:19:31.133 Read Only: No 00:19:31.133 Volatile Memory Backup: OK 00:19:31.133 Current Temperature: 0 Kelvin (-273 Celsius) 00:19:31.133 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:19:31.133 Available Spare: 0% 00:19:31.133 Available Spare Threshold: 0% 00:19:31.133 Life Percentage Used:[2024-11-20 11:32:16.624349] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:31.133 [2024-11-20 11:32:16.624356] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x142fd90) 00:19:31.133 [2024-11-20 11:32:16.624365] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.133 [2024-11-20 11:32:16.624392] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1471080, cid 7, qid 0 00:19:31.133 [2024-11-20 11:32:16.624878] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:31.133 [2024-11-20 11:32:16.624895] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:31.133 [2024-11-20 11:32:16.624900] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:31.133 [2024-11-20 11:32:16.624905] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1471080) on tqpair=0x142fd90 00:19:31.133 [2024-11-20 11:32:16.624948] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD 00:19:31.133 [2024-11-20 11:32:16.624961] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1470600) on tqpair=0x142fd90 00:19:31.133 [2024-11-20 11:32:16.624969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.133 [2024-11-20 11:32:16.624975] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1470780) on tqpair=0x142fd90 00:19:31.133 [2024-11-20 11:32:16.624980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.134 [2024-11-20 11:32:16.624986] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1470900) on tqpair=0x142fd90 00:19:31.134 [2024-11-20 11:32:16.624991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.134 [2024-11-20 11:32:16.624997] 
nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1470a80) on tqpair=0x142fd90 00:19:31.134 [2024-11-20 11:32:16.625002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.134 [2024-11-20 11:32:16.625013] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:31.134 [2024-11-20 11:32:16.625018] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:31.134 [2024-11-20 11:32:16.625023] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x142fd90) 00:19:31.134 [2024-11-20 11:32:16.625031] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.134 [2024-11-20 11:32:16.625058] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1470a80, cid 3, qid 0 00:19:31.134 [2024-11-20 11:32:16.625456] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:31.134 [2024-11-20 11:32:16.625472] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:31.134 [2024-11-20 11:32:16.625478] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:31.134 [2024-11-20 11:32:16.625482] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1470a80) on tqpair=0x142fd90 00:19:31.134 [2024-11-20 11:32:16.625492] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:31.134 [2024-11-20 11:32:16.625497] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:31.134 [2024-11-20 11:32:16.625501] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x142fd90) 00:19:31.134 [2024-11-20 11:32:16.625510] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.134 [2024-11-20 11:32:16.625535] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1470a80, cid 3, qid 0 00:19:31.134 [2024-11-20 11:32:16.629637] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:31.134 [2024-11-20 11:32:16.629661] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:31.134 [2024-11-20 11:32:16.629667] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:31.134 [2024-11-20 11:32:16.629672] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1470a80) on tqpair=0x142fd90 00:19:31.134 [2024-11-20 11:32:16.629678] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us 00:19:31.134 [2024-11-20 11:32:16.629684] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms 00:19:31.134 [2024-11-20 11:32:16.629699] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:31.134 [2024-11-20 11:32:16.629706] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:31.134 [2024-11-20 11:32:16.629710] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x142fd90) 00:19:31.134 [2024-11-20 11:32:16.629720] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.134 [2024-11-20 11:32:16.629749] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1470a80, cid 3, qid 0 00:19:31.134 [2024-11-20 11:32:16.630177] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: 
pdu type = 5 00:19:31.134 [2024-11-20 11:32:16.630192] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:31.134 [2024-11-20 11:32:16.630197] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:31.134 [2024-11-20 11:32:16.630202] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1470a80) on tqpair=0x142fd90 00:19:31.134 [2024-11-20 11:32:16.630213] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 0 milliseconds 00:19:31.134 0% 00:19:31.134 Data Units Read: 0 00:19:31.134 Data Units Written: 0 00:19:31.134 Host Read Commands: 0 00:19:31.134 Host Write Commands: 0 00:19:31.134 Controller Busy Time: 0 minutes 00:19:31.134 Power Cycles: 0 00:19:31.134 Power On Hours: 0 hours 00:19:31.134 Unsafe Shutdowns: 0 00:19:31.134 Unrecoverable Media Errors: 0 00:19:31.134 Lifetime Error Log Entries: 0 00:19:31.134 Warning Temperature Time: 0 minutes 00:19:31.134 Critical Temperature Time: 0 minutes 00:19:31.134 00:19:31.134 Number of Queues 00:19:31.134 ================ 00:19:31.134 Number of I/O Submission Queues: 127 00:19:31.134 Number of I/O Completion Queues: 127 00:19:31.134 00:19:31.134 Active Namespaces 00:19:31.134 ================= 00:19:31.134 Namespace ID:1 00:19:31.134 Error Recovery Timeout: Unlimited 00:19:31.134 Command Set Identifier: NVM (00h) 00:19:31.134 Deallocate: Supported 00:19:31.134 Deallocated/Unwritten Error: Not Supported 00:19:31.134 Deallocated Read Value: Unknown 00:19:31.134 Deallocate in Write Zeroes: Not Supported 00:19:31.134 Deallocated Guard Field: 0xFFFF 00:19:31.134 Flush: Supported 00:19:31.134 Reservation: Supported 00:19:31.134 Namespace Sharing Capabilities: Multiple Controllers 00:19:31.134 Size (in LBAs): 131072 (0GiB) 00:19:31.134 Capacity (in LBAs): 131072 (0GiB) 00:19:31.134 Utilization (in LBAs): 131072 (0GiB) 00:19:31.134 NGUID: ABCDEF0123456789ABCDEF0123456789 00:19:31.134 EUI64: ABCDEF0123456789 00:19:31.134 UUID: 06c3efbb-fea1-4e70-a700-452ed13154a1 00:19:31.134 Thin Provisioning: Not Supported 00:19:31.134 Per-NS Atomic Units: Yes 00:19:31.134 Atomic Boundary Size (Normal): 0 00:19:31.134 Atomic Boundary Size (PFail): 0 00:19:31.134 Atomic Boundary Offset: 0 00:19:31.134 Maximum Single Source Range Length: 65535 00:19:31.134 Maximum Copy Length: 65535 00:19:31.134 Maximum Source Range Count: 1 00:19:31.134 NGUID/EUI64 Never Reused: No 00:19:31.134 Namespace Write Protected: No 00:19:31.134 Number of LBA Formats: 1 00:19:31.134 Current LBA Format: LBA Format #00 00:19:31.134 LBA Format #00: Data Size: 512 Metadata Size: 0 00:19:31.134 00:19:31.134 11:32:16 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:19:31.134 11:32:16 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:31.134 11:32:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.134 11:32:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:19:31.134 11:32:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:31.134 11:32:16 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:19:31.134 11:32:16 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:19:31.134 11:32:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:31.134 11:32:16 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@121 -- # sync 00:19:31.134 11:32:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:31.134 11:32:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:19:31.134 11:32:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:31.134 11:32:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:31.393 rmmod nvme_tcp 00:19:31.393 rmmod nvme_fabrics 00:19:31.393 rmmod nvme_keyring 00:19:31.393 11:32:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:31.393 11:32:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:19:31.393 11:32:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:19:31.393 11:32:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@517 -- # '[' -n 86896 ']' 00:19:31.393 11:32:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # killprocess 86896 00:19:31.393 11:32:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # '[' -z 86896 ']' 00:19:31.393 11:32:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # kill -0 86896 00:19:31.393 11:32:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # uname 00:19:31.393 11:32:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:31.393 11:32:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86896 00:19:31.393 killing process with pid 86896 00:19:31.393 11:32:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:31.393 11:32:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:31.393 11:32:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86896' 00:19:31.393 11:32:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@973 -- # kill 86896 00:19:31.393 11:32:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@978 -- # wait 86896 00:19:31.393 11:32:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:31.393 11:32:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:31.393 11:32:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:31.393 11:32:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:19:31.393 11:32:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-save 00:19:31.393 11:32:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-restore 00:19:31.393 11:32:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:31.393 11:32:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:31.393 11:32:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:19:31.393 11:32:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:19:31.652 11:32:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:19:31.652 11:32:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:19:31.652 11:32:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 
nomaster 00:19:31.652 11:32:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:19:31.652 11:32:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:19:31.652 11:32:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:19:31.652 11:32:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:19:31.652 11:32:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:19:31.652 11:32:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:19:31.652 11:32:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:19:31.652 11:32:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:31.652 11:32:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:31.652 11:32:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@246 -- # remove_spdk_ns 00:19:31.652 11:32:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:31.652 11:32:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:31.652 11:32:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:31.652 11:32:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@300 -- # return 0 00:19:31.652 00:19:31.652 real 0m2.380s 00:19:31.652 user 0m4.839s 00:19:31.652 sys 0m0.698s 00:19:31.652 11:32:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:31.652 ************************************ 00:19:31.652 11:32:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:19:31.652 END TEST nvmf_identify 00:19:31.652 ************************************ 00:19:31.913 11:32:17 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:19:31.913 11:32:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:31.913 11:32:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:31.913 11:32:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:19:31.913 ************************************ 00:19:31.913 START TEST nvmf_perf 00:19:31.913 ************************************ 00:19:31.913 11:32:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:19:31.913 * Looking for test storage... 
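The nvmf_identify run above attaches to the target at 10.0.0.3:4420 (subsystem nqn.2016-06.io.spdk:cnode1) with SPDK's own userspace identify utility, dumps the controller and namespace data, and then tears the fabric back down. For reference, a minimal sketch of how the same identify data could be pulled with the kernel initiator via nvme-cli follows; this is not what the test executes, and the device node (/dev/nvme0) is an assumption that depends on what is already present on the host.

# Hypothetical nvme-cli equivalent of the identify step above (the test itself uses
# SPDK's userspace identify example). Address, port and NQN are taken from the log.
modprobe nvme-tcp
nvme discover -t tcp -a 10.0.0.3 -s 4420                                 # show discovery log entries
nvme connect  -t tcp -a 10.0.0.3 -s 4420 -n nqn.2016-06.io.spdk:cnode1   # attach the subsystem
nvme list                                                                # locate the new controller
nvme id-ctrl /dev/nvme0                                                  # assumed device node
nvme disconnect -n nqn.2016-06.io.spdk:cnode1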
00:19:31.913 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:19:31.913 11:32:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:31.913 11:32:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # lcov --version 00:19:31.913 11:32:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:31.913 11:32:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:31.913 11:32:17 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:31.913 11:32:17 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:31.913 11:32:17 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:31.913 11:32:17 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:19:31.913 11:32:17 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:19:31.913 11:32:17 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:19:31.913 11:32:17 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:19:31.913 11:32:17 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:19:31.913 11:32:17 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:19:31.913 11:32:17 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:19:31.913 11:32:17 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:31.913 11:32:17 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:19:31.914 11:32:17 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:19:31.914 11:32:17 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:31.914 11:32:17 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:31.914 11:32:17 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:19:31.914 11:32:17 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:19:31.914 11:32:17 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:31.914 11:32:17 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:19:31.914 11:32:17 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:19:31.914 11:32:17 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:19:31.914 11:32:17 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:19:31.914 11:32:17 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:31.914 11:32:17 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:19:31.914 11:32:17 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:19:31.914 11:32:17 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:31.914 11:32:17 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:31.914 11:32:17 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:19:31.914 11:32:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:31.914 11:32:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:31.914 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:31.914 --rc genhtml_branch_coverage=1 00:19:31.914 --rc genhtml_function_coverage=1 00:19:31.914 --rc genhtml_legend=1 00:19:31.914 --rc geninfo_all_blocks=1 00:19:31.914 --rc geninfo_unexecuted_blocks=1 00:19:31.914 00:19:31.914 ' 00:19:31.914 11:32:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:31.914 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:31.914 --rc genhtml_branch_coverage=1 00:19:31.914 --rc genhtml_function_coverage=1 00:19:31.914 --rc genhtml_legend=1 00:19:31.914 --rc geninfo_all_blocks=1 00:19:31.914 --rc geninfo_unexecuted_blocks=1 00:19:31.914 00:19:31.914 ' 00:19:31.914 11:32:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:31.914 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:31.914 --rc genhtml_branch_coverage=1 00:19:31.914 --rc genhtml_function_coverage=1 00:19:31.914 --rc genhtml_legend=1 00:19:31.914 --rc geninfo_all_blocks=1 00:19:31.914 --rc geninfo_unexecuted_blocks=1 00:19:31.914 00:19:31.914 ' 00:19:31.914 11:32:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:31.914 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:31.914 --rc genhtml_branch_coverage=1 00:19:31.914 --rc genhtml_function_coverage=1 00:19:31.914 --rc genhtml_legend=1 00:19:31.914 --rc geninfo_all_blocks=1 00:19:31.914 --rc geninfo_unexecuted_blocks=1 00:19:31.914 00:19:31.914 ' 00:19:31.914 11:32:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:31.914 11:32:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:19:31.914 11:32:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:31.914 11:32:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:31.914 11:32:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:19:31.914 11:32:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:31.914 11:32:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:31.914 11:32:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:31.914 11:32:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:31.914 11:32:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:31.914 11:32:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:31.914 11:32:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:31.914 11:32:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a 00:19:31.914 11:32:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=806d442c-08ba-4719-b76d-33169283459a 00:19:31.914 11:32:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:31.914 11:32:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:31.914 11:32:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:31.914 11:32:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:31.914 11:32:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:31.914 11:32:17 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:19:31.914 11:32:17 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:31.914 11:32:17 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:31.914 11:32:17 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:31.914 11:32:17 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:31.914 11:32:17 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:31.914 11:32:17 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:31.914 11:32:17 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:19:31.914 11:32:17 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:31.914 11:32:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:19:31.914 11:32:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:31.914 11:32:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:31.914 11:32:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:31.914 11:32:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:31.914 11:32:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:31.914 11:32:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:31.914 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:31.914 11:32:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:31.914 11:32:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:31.914 11:32:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:31.914 11:32:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:19:31.914 11:32:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:19:31.914 11:32:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:31.914 11:32:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:19:31.914 11:32:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:31.914 11:32:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:31.914 11:32:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:31.914 11:32:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:31.914 11:32:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:31.914 11:32:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:31.914 11:32:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- 
# eval '_remove_spdk_ns 15> /dev/null' 00:19:31.914 11:32:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:32.185 11:32:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:19:32.186 11:32:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:19:32.186 11:32:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:19:32.186 11:32:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:19:32.186 11:32:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:19:32.186 11:32:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@460 -- # nvmf_veth_init 00:19:32.186 11:32:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:32.186 11:32:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:19:32.186 11:32:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:19:32.186 11:32:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:19:32.186 11:32:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:32.186 11:32:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:19:32.186 11:32:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:32.186 11:32:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:19:32.186 11:32:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:32.186 11:32:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:19:32.186 11:32:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:32.186 11:32:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:32.186 11:32:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:32.186 11:32:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:32.186 11:32:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:32.186 11:32:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:32.186 11:32:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:19:32.186 Cannot find device "nvmf_init_br" 00:19:32.186 11:32:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@162 -- # true 00:19:32.186 11:32:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:19:32.186 Cannot find device "nvmf_init_br2" 00:19:32.186 11:32:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@163 -- # true 00:19:32.186 11:32:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:19:32.186 Cannot find device "nvmf_tgt_br" 00:19:32.186 11:32:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@164 -- # true 00:19:32.186 11:32:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:19:32.186 Cannot find device "nvmf_tgt_br2" 00:19:32.186 11:32:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@165 -- # true 00:19:32.186 11:32:17 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:19:32.186 Cannot find device "nvmf_init_br" 00:19:32.186 11:32:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@166 -- # true 00:19:32.186 11:32:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:19:32.186 Cannot find device "nvmf_init_br2" 00:19:32.186 11:32:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@167 -- # true 00:19:32.186 11:32:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:19:32.186 Cannot find device "nvmf_tgt_br" 00:19:32.186 11:32:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@168 -- # true 00:19:32.186 11:32:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:19:32.186 Cannot find device "nvmf_tgt_br2" 00:19:32.186 11:32:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@169 -- # true 00:19:32.186 11:32:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:19:32.186 Cannot find device "nvmf_br" 00:19:32.186 11:32:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@170 -- # true 00:19:32.186 11:32:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:19:32.186 Cannot find device "nvmf_init_if" 00:19:32.186 11:32:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@171 -- # true 00:19:32.186 11:32:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:19:32.186 Cannot find device "nvmf_init_if2" 00:19:32.186 11:32:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@172 -- # true 00:19:32.186 11:32:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:32.186 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:32.186 11:32:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@173 -- # true 00:19:32.186 11:32:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:32.186 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:32.186 11:32:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@174 -- # true 00:19:32.186 11:32:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:19:32.186 11:32:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:32.186 11:32:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:19:32.186 11:32:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:32.186 11:32:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:32.186 11:32:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:32.186 11:32:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:32.186 11:32:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:32.186 11:32:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:19:32.186 11:32:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:19:32.186 11:32:17 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:19:32.186 11:32:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:19:32.186 11:32:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:19:32.186 11:32:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:19:32.186 11:32:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:19:32.186 11:32:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:19:32.186 11:32:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:19:32.445 11:32:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:32.446 11:32:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:32.446 11:32:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:32.446 11:32:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:19:32.446 11:32:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:19:32.446 11:32:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:19:32.446 11:32:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:19:32.446 11:32:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:32.446 11:32:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:32.446 11:32:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:32.446 11:32:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:19:32.446 11:32:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:19:32.446 11:32:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:19:32.446 11:32:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:32.446 11:32:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:19:32.446 11:32:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:19:32.446 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:32.446 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.072 ms 00:19:32.446 00:19:32.446 --- 10.0.0.3 ping statistics --- 00:19:32.446 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:32.446 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:19:32.446 11:32:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:19:32.446 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:19:32.446 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.053 ms 00:19:32.446 00:19:32.446 --- 10.0.0.4 ping statistics --- 00:19:32.446 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:32.446 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:19:32.446 11:32:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:32.446 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:32.446 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.036 ms 00:19:32.446 00:19:32.446 --- 10.0.0.1 ping statistics --- 00:19:32.446 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:32.446 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:19:32.446 11:32:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:19:32.446 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:32.446 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.080 ms 00:19:32.446 00:19:32.446 --- 10.0.0.2 ping statistics --- 00:19:32.446 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:32.446 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:19:32.446 11:32:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:32.446 11:32:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@461 -- # return 0 00:19:32.446 11:32:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:32.446 11:32:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:32.446 11:32:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:32.446 11:32:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:32.446 11:32:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:32.446 11:32:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:32.446 11:32:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:32.446 11:32:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:19:32.446 11:32:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:32.446 11:32:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:32.446 11:32:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:19:32.446 11:32:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # nvmfpid=87163 00:19:32.446 11:32:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:19:32.446 11:32:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # waitforlisten 87163 00:19:32.446 11:32:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # '[' -z 87163 ']' 00:19:32.446 11:32:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:32.446 11:32:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:32.446 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:32.446 11:32:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:19:32.446 11:32:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:32.446 11:32:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:19:32.446 [2024-11-20 11:32:17.990260] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 00:19:32.446 [2024-11-20 11:32:17.990360] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:32.724 [2024-11-20 11:32:18.147976] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:32.725 [2024-11-20 11:32:18.188422] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:32.725 [2024-11-20 11:32:18.188484] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:32.725 [2024-11-20 11:32:18.188500] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:32.725 [2024-11-20 11:32:18.188510] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:32.725 [2024-11-20 11:32:18.188519] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:32.725 [2024-11-20 11:32:18.189376] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:32.725 [2024-11-20 11:32:18.190142] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:32.725 [2024-11-20 11:32:18.190248] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:19:32.725 [2024-11-20 11:32:18.190417] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:33.662 11:32:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:33.662 11:32:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@868 -- # return 0 00:19:33.662 11:32:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:33.662 11:32:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:33.662 11:32:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:19:33.662 11:32:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:33.662 11:32:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:19:33.662 11:32:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config 00:19:34.227 11:32:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_get_config bdev 00:19:34.227 11:32:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:19:34.486 11:32:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:00:10.0 00:19:34.486 11:32:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:34.745 11:32:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:19:34.745 11:32:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:00:10.0 ']' 00:19:34.745 11:32:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 
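Before this point, nvmftestinit/nvmf_veth_init (traced above) builds the virtual topology the perf test talks over: veth pairs for the initiator and target sides, bridged together over nvmf_br, with the target ends moved into the nvmf_tgt_ns_spdk namespace at 10.0.0.3/10.0.0.4. A minimal sketch of that plumbing, condensed from the ip/iptables commands in the trace and reduced to the first initiator/target pair (the test also creates a second pair for 10.0.0.2/10.0.0.4), is:

# Condensed from the commands traced above; run as root. Only the
# 10.0.0.1 <-> 10.0.0.3 pair is shown.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.3                                   # host -> target namespace
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1    # target namespace -> host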
00:19:34.745 11:32:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:19:34.745 11:32:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:35.001 [2024-11-20 11:32:20.442576] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:35.001 11:32:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:35.259 11:32:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:19:35.259 11:32:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:35.517 11:32:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:19:35.517 11:32:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:19:36.084 11:32:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:19:36.342 [2024-11-20 11:32:21.680309] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:19:36.342 11:32:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:19:36.601 11:32:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:19:36.601 11:32:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:19:36.601 11:32:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:19:36.601 11:32:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:19:37.537 Initializing NVMe Controllers 00:19:37.537 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:19:37.537 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:19:37.537 Initialization complete. Launching workers. 00:19:37.537 ======================================================== 00:19:37.537 Latency(us) 00:19:37.537 Device Information : IOPS MiB/s Average min max 00:19:37.537 PCIE (0000:00:10.0) NSID 1 from core 0: 24411.56 95.36 1310.73 315.79 6069.09 00:19:37.537 ======================================================== 00:19:37.537 Total : 24411.56 95.36 1310.73 315.79 6069.09 00:19:37.537 00:19:37.537 11:32:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:19:38.915 Initializing NVMe Controllers 00:19:38.915 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:19:38.915 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:19:38.915 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:19:38.915 Initialization complete. Launching workers. 
00:19:38.915 ======================================================== 00:19:38.915 Latency(us) 00:19:38.915 Device Information : IOPS MiB/s Average min max 00:19:38.915 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3517.84 13.74 283.92 116.61 4279.85 00:19:38.915 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 123.99 0.48 8128.31 6959.59 12028.52 00:19:38.915 ======================================================== 00:19:38.915 Total : 3641.84 14.23 551.00 116.61 12028.52 00:19:38.915 00:19:38.915 11:32:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:19:40.292 Initializing NVMe Controllers 00:19:40.292 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:19:40.292 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:19:40.292 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:19:40.292 Initialization complete. Launching workers. 00:19:40.292 ======================================================== 00:19:40.292 Latency(us) 00:19:40.292 Device Information : IOPS MiB/s Average min max 00:19:40.292 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8550.13 33.40 3783.00 827.74 42069.96 00:19:40.292 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 2664.17 10.41 12105.11 6793.52 24142.61 00:19:40.292 ======================================================== 00:19:40.292 Total : 11214.30 43.81 5760.08 827.74 42069.96 00:19:40.292 00:19:40.292 11:32:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ '' == \e\8\1\0 ]] 00:19:40.292 11:32:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:19:42.825 Initializing NVMe Controllers 00:19:42.825 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:19:42.825 Controller IO queue size 128, less than required. 00:19:42.825 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:19:42.825 Controller IO queue size 128, less than required. 00:19:42.825 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:19:42.825 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:19:42.825 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:19:42.825 Initialization complete. Launching workers. 
00:19:42.825 ======================================================== 00:19:42.825 Latency(us) 00:19:42.826 Device Information : IOPS MiB/s Average min max 00:19:42.826 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1676.79 419.20 77210.67 39117.68 124580.31 00:19:42.826 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 603.74 150.94 220990.66 88523.11 375885.53 00:19:42.826 ======================================================== 00:19:42.826 Total : 2280.53 570.13 115274.71 39117.68 375885.53 00:19:42.826 00:19:42.826 11:32:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -c 0xf -P 4 00:19:43.084 Initializing NVMe Controllers 00:19:43.084 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:19:43.084 Controller IO queue size 128, less than required. 00:19:43.084 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:19:43.084 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:19:43.084 Controller IO queue size 128, less than required. 00:19:43.084 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:19:43.084 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 4096. Removing this ns from test 00:19:43.084 WARNING: Some requested NVMe devices were skipped 00:19:43.084 No valid NVMe controllers or AIO or URING devices found 00:19:43.342 11:32:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' --transport-stat 00:19:45.876 Initializing NVMe Controllers 00:19:45.876 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:19:45.876 Controller IO queue size 128, less than required. 00:19:45.876 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:19:45.876 Controller IO queue size 128, less than required. 00:19:45.876 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:19:45.876 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:19:45.876 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:19:45.876 Initialization complete. Launching workers. 
00:19:45.876 00:19:45.876 ==================== 00:19:45.876 lcore 0, ns TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:19:45.876 TCP transport: 00:19:45.876 polls: 8772 00:19:45.876 idle_polls: 4163 00:19:45.876 sock_completions: 4609 00:19:45.876 nvme_completions: 4745 00:19:45.876 submitted_requests: 7032 00:19:45.876 queued_requests: 1 00:19:45.876 00:19:45.876 ==================== 00:19:45.876 lcore 0, ns TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:19:45.876 TCP transport: 00:19:45.876 polls: 8863 00:19:45.876 idle_polls: 5408 00:19:45.876 sock_completions: 3455 00:19:45.876 nvme_completions: 6741 00:19:45.876 submitted_requests: 10164 00:19:45.876 queued_requests: 1 00:19:45.876 ======================================================== 00:19:45.876 Latency(us) 00:19:45.876 Device Information : IOPS MiB/s Average min max 00:19:45.876 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1184.32 296.08 111007.69 68579.18 187368.15 00:19:45.876 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1682.61 420.65 76941.05 27072.46 131533.49 00:19:45.876 ======================================================== 00:19:45.876 Total : 2866.94 716.73 91013.85 27072.46 187368.15 00:19:45.876 00:19:45.876 11:32:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:19:45.876 11:32:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:46.135 11:32:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:19:46.135 11:32:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:19:46.135 11:32:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:19:46.135 11:32:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:46.135 11:32:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync 00:19:46.135 11:32:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:46.135 11:32:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:19:46.135 11:32:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:46.135 11:32:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:46.135 rmmod nvme_tcp 00:19:46.135 rmmod nvme_fabrics 00:19:46.135 rmmod nvme_keyring 00:19:46.135 11:32:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:46.135 11:32:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:19:46.135 11:32:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:19:46.135 11:32:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@517 -- # '[' -n 87163 ']' 00:19:46.135 11:32:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # killprocess 87163 00:19:46.135 11:32:31 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # '[' -z 87163 ']' 00:19:46.135 11:32:31 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # kill -0 87163 00:19:46.135 11:32:31 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # uname 00:19:46.135 11:32:31 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:46.135 11:32:31 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87163 00:19:46.135 11:32:31 nvmf_tcp.nvmf_host.nvmf_perf -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:46.135 11:32:31 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:46.135 killing process with pid 87163 00:19:46.135 11:32:31 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87163' 00:19:46.135 11:32:31 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@973 -- # kill 87163 00:19:46.135 11:32:31 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@978 -- # wait 87163 00:19:46.756 11:32:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:46.756 11:32:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:46.756 11:32:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:46.756 11:32:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:19:46.756 11:32:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:46.756 11:32:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-save 00:19:46.756 11:32:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-restore 00:19:46.756 11:32:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:46.756 11:32:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:19:46.756 11:32:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:19:46.756 11:32:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:19:46.756 11:32:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:19:46.756 11:32:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:19:47.015 11:32:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:19:47.015 11:32:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:19:47.015 11:32:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:19:47.015 11:32:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:19:47.015 11:32:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:19:47.015 11:32:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:19:47.015 11:32:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:19:47.015 11:32:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:47.015 11:32:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:47.015 11:32:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@246 -- # remove_spdk_ns 00:19:47.015 11:32:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:47.015 11:32:32 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:47.015 11:32:32 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:47.015 11:32:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@300 -- # return 0 00:19:47.015 00:19:47.015 real 0m15.254s 00:19:47.015 user 0m55.806s 00:19:47.015 sys 0m3.586s 00:19:47.015 11:32:32 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:47.015 11:32:32 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:19:47.015 ************************************ 00:19:47.015 END TEST nvmf_perf 00:19:47.015 ************************************ 00:19:47.015 11:32:32 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:19:47.015 11:32:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:47.015 11:32:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:47.015 11:32:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:19:47.015 ************************************ 00:19:47.015 START TEST nvmf_fio_host 00:19:47.015 ************************************ 00:19:47.015 11:32:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:19:47.275 * Looking for test storage... 00:19:47.275 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:19:47.275 11:32:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:47.275 11:32:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # lcov --version 00:19:47.275 11:32:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:47.275 11:32:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:47.275 11:32:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:47.275 11:32:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:47.275 11:32:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:47.275 11:32:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:19:47.275 11:32:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:19:47.275 11:32:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:19:47.275 11:32:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:19:47.275 11:32:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:19:47.275 11:32:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:19:47.275 11:32:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:19:47.275 11:32:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:47.275 11:32:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:19:47.275 11:32:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:19:47.275 11:32:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:47.275 11:32:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:47.275 11:32:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:19:47.275 11:32:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:19:47.275 11:32:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:47.275 11:32:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:19:47.275 11:32:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:19:47.275 11:32:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:19:47.275 11:32:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:19:47.275 11:32:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:47.275 11:32:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:19:47.275 11:32:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:19:47.275 11:32:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:47.275 11:32:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:47.275 11:32:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:19:47.275 11:32:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:47.275 11:32:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:47.275 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:47.275 --rc genhtml_branch_coverage=1 00:19:47.275 --rc genhtml_function_coverage=1 00:19:47.275 --rc genhtml_legend=1 00:19:47.275 --rc geninfo_all_blocks=1 00:19:47.275 --rc geninfo_unexecuted_blocks=1 00:19:47.275 00:19:47.275 ' 00:19:47.275 11:32:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:47.275 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:47.275 --rc genhtml_branch_coverage=1 00:19:47.275 --rc genhtml_function_coverage=1 00:19:47.275 --rc genhtml_legend=1 00:19:47.275 --rc geninfo_all_blocks=1 00:19:47.275 --rc geninfo_unexecuted_blocks=1 00:19:47.275 00:19:47.275 ' 00:19:47.275 11:32:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:47.275 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:47.275 --rc genhtml_branch_coverage=1 00:19:47.275 --rc genhtml_function_coverage=1 00:19:47.275 --rc genhtml_legend=1 00:19:47.275 --rc geninfo_all_blocks=1 00:19:47.275 --rc geninfo_unexecuted_blocks=1 00:19:47.275 00:19:47.275 ' 00:19:47.275 11:32:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:47.275 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:47.275 --rc genhtml_branch_coverage=1 00:19:47.275 --rc genhtml_function_coverage=1 00:19:47.275 --rc genhtml_legend=1 00:19:47.275 --rc geninfo_all_blocks=1 00:19:47.275 --rc geninfo_unexecuted_blocks=1 00:19:47.275 00:19:47.275 ' 00:19:47.275 11:32:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:47.275 11:32:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:19:47.275 11:32:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:47.275 11:32:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:47.275 11:32:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:47.275 11:32:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:47.275 11:32:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:47.275 11:32:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:47.275 11:32:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:19:47.275 11:32:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:47.275 11:32:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:47.275 11:32:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:19:47.275 11:32:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:47.275 11:32:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:47.275 11:32:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:47.275 11:32:32 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:47.275 11:32:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:47.275 11:32:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:47.275 11:32:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:47.275 11:32:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:47.275 11:32:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:47.276 11:32:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:47.276 11:32:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a 00:19:47.276 11:32:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=806d442c-08ba-4719-b76d-33169283459a 00:19:47.276 11:32:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:47.276 11:32:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:47.276 11:32:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:47.276 11:32:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:47.276 11:32:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:47.276 11:32:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:19:47.276 11:32:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:47.276 11:32:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:47.276 11:32:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:47.276 11:32:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:47.276 11:32:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:47.276 11:32:32 
nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:47.276 11:32:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:19:47.276 11:32:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:47.276 11:32:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:19:47.276 11:32:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:47.276 11:32:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:47.276 11:32:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:47.276 11:32:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:47.276 11:32:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:47.276 11:32:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:47.276 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:47.276 11:32:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:47.276 11:32:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:47.276 11:32:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:47.276 11:32:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:47.276 11:32:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:19:47.276 11:32:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:47.276 11:32:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:47.276 11:32:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:47.276 11:32:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:47.276 11:32:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:47.276 11:32:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 
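[Editor's note] nvmftestinit now tears down any stale namespace and rebuilds the virtual topology that the ip commands in the entries below show: two initiator-side interfaces (10.0.0.1, 10.0.0.2) and two target-side interfaces (10.0.0.3, 10.0.0.4) inside nvmf_tgt_ns_spdk, joined through the nvmf_br bridge. Reduced to one veth pair per side, the construction looks roughly like this (interface and address names taken from the log):

    # Simplified sketch of the veth/bridge setup performed by nvmf_veth_init;
    # the log uses two initiator and two target interfaces plus iptables ACCEPT rules.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br      # initiator half
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br        # target half
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    # bring the interfaces and bridge up, then verify reachability with ping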
00:19:47.276 11:32:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:47.276 11:32:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:47.276 11:32:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:19:47.276 11:32:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:19:47.276 11:32:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:19:47.276 11:32:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:19:47.276 11:32:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:19:47.276 11:32:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@460 -- # nvmf_veth_init 00:19:47.276 11:32:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:47.276 11:32:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:19:47.276 11:32:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:19:47.276 11:32:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:19:47.276 11:32:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:47.276 11:32:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:19:47.276 11:32:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:47.276 11:32:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:19:47.276 11:32:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:47.276 11:32:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:19:47.276 11:32:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:47.276 11:32:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:47.276 11:32:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:47.276 11:32:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:47.276 11:32:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:47.276 11:32:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:47.276 11:32:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:19:47.276 Cannot find device "nvmf_init_br" 00:19:47.276 11:32:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@162 -- # true 00:19:47.276 11:32:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:19:47.276 Cannot find device "nvmf_init_br2" 00:19:47.276 11:32:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@163 -- # true 00:19:47.276 11:32:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:19:47.276 Cannot find device "nvmf_tgt_br" 00:19:47.276 11:32:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@164 -- # true 00:19:47.276 11:32:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@165 -- # ip link set 
nvmf_tgt_br2 nomaster 00:19:47.276 Cannot find device "nvmf_tgt_br2" 00:19:47.276 11:32:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@165 -- # true 00:19:47.276 11:32:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:19:47.276 Cannot find device "nvmf_init_br" 00:19:47.276 11:32:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@166 -- # true 00:19:47.276 11:32:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:19:47.276 Cannot find device "nvmf_init_br2" 00:19:47.276 11:32:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@167 -- # true 00:19:47.276 11:32:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:19:47.535 Cannot find device "nvmf_tgt_br" 00:19:47.535 11:32:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@168 -- # true 00:19:47.535 11:32:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:19:47.535 Cannot find device "nvmf_tgt_br2" 00:19:47.535 11:32:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@169 -- # true 00:19:47.535 11:32:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:19:47.535 Cannot find device "nvmf_br" 00:19:47.535 11:32:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@170 -- # true 00:19:47.535 11:32:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:19:47.535 Cannot find device "nvmf_init_if" 00:19:47.535 11:32:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@171 -- # true 00:19:47.535 11:32:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:19:47.535 Cannot find device "nvmf_init_if2" 00:19:47.535 11:32:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@172 -- # true 00:19:47.535 11:32:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:47.535 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:47.535 11:32:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@173 -- # true 00:19:47.535 11:32:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:47.535 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:47.535 11:32:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@174 -- # true 00:19:47.535 11:32:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:19:47.535 11:32:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:47.535 11:32:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:19:47.535 11:32:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:47.535 11:32:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:47.535 11:32:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:47.535 11:32:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:47.535 11:32:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev 
nvmf_init_if 00:19:47.535 11:32:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:19:47.535 11:32:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:19:47.535 11:32:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:19:47.535 11:32:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:19:47.535 11:32:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:19:47.535 11:32:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:19:47.535 11:32:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:19:47.535 11:32:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:19:47.535 11:32:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:19:47.535 11:32:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:47.535 11:32:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:47.535 11:32:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:47.535 11:32:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:19:47.535 11:32:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:19:47.536 11:32:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:19:47.536 11:32:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:19:47.536 11:32:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:47.536 11:32:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:47.536 11:32:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:47.536 11:32:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:19:47.793 11:32:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:19:47.793 11:32:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:19:47.793 11:32:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:47.793 11:32:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:19:47.793 11:32:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:19:47.793 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:19:47.793 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.097 ms 00:19:47.793 00:19:47.793 --- 10.0.0.3 ping statistics --- 00:19:47.793 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:47.793 rtt min/avg/max/mdev = 0.097/0.097/0.097/0.000 ms 00:19:47.793 11:32:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:19:47.793 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:19:47.793 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.046 ms 00:19:47.793 00:19:47.793 --- 10.0.0.4 ping statistics --- 00:19:47.793 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:47.793 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:19:47.793 11:32:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:47.793 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:47.793 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:19:47.793 00:19:47.793 --- 10.0.0.1 ping statistics --- 00:19:47.793 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:47.793 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:19:47.793 11:32:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:19:47.793 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:47.793 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.055 ms 00:19:47.793 00:19:47.793 --- 10.0.0.2 ping statistics --- 00:19:47.793 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:47.793 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:19:47.793 11:32:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:47.793 11:32:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@461 -- # return 0 00:19:47.793 11:32:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:47.793 11:32:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:47.793 11:32:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:47.793 11:32:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:47.793 11:32:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:47.793 11:32:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:47.793 11:32:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:47.793 11:32:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:19:47.793 11:32:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:19:47.793 11:32:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:47.793 11:32:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:19:47.793 11:32:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=87699 00:19:47.793 11:32:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:19:47.793 11:32:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:47.793 11:32:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 87699 00:19:47.793 11:32:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@835 -- # '[' -z 87699 ']' 00:19:47.793 11:32:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:47.793 11:32:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:47.793 11:32:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:47.793 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:47.793 11:32:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:47.793 11:32:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:19:47.793 [2024-11-20 11:32:33.249003] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 00:19:47.793 [2024-11-20 11:32:33.249095] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:48.051 [2024-11-20 11:32:33.402303] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:48.051 [2024-11-20 11:32:33.442015] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:48.051 [2024-11-20 11:32:33.442068] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:48.051 [2024-11-20 11:32:33.442081] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:48.051 [2024-11-20 11:32:33.442091] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:48.051 [2024-11-20 11:32:33.442100] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
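[Editor's note] Once the target is up, fio.sh configures the TCP transport, a Malloc1-backed subsystem, and a listener, then drives I/O through the SPDK fio plugin rather than the kernel initiator: the plugin is LD_PRELOADed into fio and the NVMe-oF connection parameters are encoded in the --filename string, as the entries below show. An equivalent standalone invocation would be roughly:

    # Standalone form of the fio_nvme wrapper call seen below (paths and connection
    # string taken from the log; the remaining job options come from example_config.fio).
    LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme \
        /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio \
        --filename='trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' \
        --bs=4096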
00:19:48.051 [2024-11-20 11:32:33.442941] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:48.051 [2024-11-20 11:32:33.443025] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:48.051 [2024-11-20 11:32:33.443154] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:19:48.051 [2024-11-20 11:32:33.443160] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:48.051 11:32:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:48.051 11:32:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@868 -- # return 0 00:19:48.051 11:32:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:19:48.308 [2024-11-20 11:32:33.831389] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:48.308 11:32:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:19:48.308 11:32:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:48.308 11:32:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:19:48.566 11:32:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:19:48.824 Malloc1 00:19:48.824 11:32:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:49.081 11:32:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:19:49.339 11:32:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:19:49.905 [2024-11-20 11:32:35.210046] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:19:49.905 11:32:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:19:50.163 11:32:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:19:50.164 11:32:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:19:50.164 11:32:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:19:50.164 11:32:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:19:50.164 11:32:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:50.164 11:32:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:19:50.164 11:32:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:19:50.164 11:32:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@1345 -- # shift 00:19:50.164 11:32:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:19:50.164 11:32:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:19:50.164 11:32:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:19:50.164 11:32:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:19:50.164 11:32:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:19:50.164 11:32:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:19:50.164 11:32:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:19:50.164 11:32:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:19:50.164 11:32:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:19:50.164 11:32:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:19:50.164 11:32:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:19:50.164 11:32:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:19:50.164 11:32:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:19:50.164 11:32:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:19:50.164 11:32:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:19:50.423 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:19:50.423 fio-3.35 00:19:50.423 Starting 1 thread 00:19:52.956 00:19:52.956 test: (groupid=0, jobs=1): err= 0: pid=87815: Wed Nov 20 11:32:38 2024 00:19:52.956 read: IOPS=8674, BW=33.9MiB/s (35.5MB/s)(68.0MiB/2007msec) 00:19:52.956 slat (usec): min=2, max=316, avg= 2.91, stdev= 3.26 00:19:52.956 clat (usec): min=3462, max=13436, avg=7735.96, stdev=640.92 00:19:52.956 lat (usec): min=3508, max=13439, avg=7738.87, stdev=640.72 00:19:52.956 clat percentiles (usec): 00:19:52.956 | 1.00th=[ 6456], 5.00th=[ 6849], 10.00th=[ 7046], 20.00th=[ 7308], 00:19:52.956 | 30.00th=[ 7439], 40.00th=[ 7570], 50.00th=[ 7701], 60.00th=[ 7832], 00:19:52.956 | 70.00th=[ 7963], 80.00th=[ 8160], 90.00th=[ 8455], 95.00th=[ 8717], 00:19:52.956 | 99.00th=[ 9896], 99.50th=[10421], 99.90th=[12256], 99.95th=[12649], 00:19:52.956 | 99.99th=[13435] 00:19:52.956 bw ( KiB/s): min=33880, max=35480, per=99.98%, avg=34690.00, stdev=855.25, samples=4 00:19:52.956 iops : min= 8470, max= 8870, avg=8672.50, stdev=213.81, samples=4 00:19:52.956 write: IOPS=8666, BW=33.9MiB/s (35.5MB/s)(67.9MiB/2007msec); 0 zone resets 00:19:52.956 slat (usec): min=2, max=315, avg= 3.01, stdev= 2.71 00:19:52.956 clat (usec): min=2504, max=13168, avg=6967.33, stdev=578.08 00:19:52.956 lat (usec): min=2518, max=13171, avg=6970.34, stdev=577.94 00:19:52.956 clat percentiles (usec): 00:19:52.956 | 1.00th=[ 5735], 5.00th=[ 6194], 10.00th=[ 6390], 20.00th=[ 6587], 00:19:52.956 | 30.00th=[ 6718], 
40.00th=[ 6849], 50.00th=[ 6915], 60.00th=[ 7046], 00:19:52.956 | 70.00th=[ 7177], 80.00th=[ 7308], 90.00th=[ 7504], 95.00th=[ 7767], 00:19:52.956 | 99.00th=[ 8848], 99.50th=[ 9372], 99.90th=[11338], 99.95th=[12387], 00:19:52.956 | 99.99th=[13042] 00:19:52.956 bw ( KiB/s): min=34240, max=35072, per=99.99%, avg=34662.00, stdev=350.99, samples=4 00:19:52.956 iops : min= 8560, max= 8768, avg=8665.50, stdev=87.75, samples=4 00:19:52.956 lat (msec) : 4=0.07%, 10=99.42%, 20=0.51% 00:19:52.956 cpu : usr=66.15%, sys=24.88%, ctx=9, majf=0, minf=7 00:19:52.956 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:19:52.956 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:52.956 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:52.956 issued rwts: total=17409,17394,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:52.956 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:52.956 00:19:52.956 Run status group 0 (all jobs): 00:19:52.956 READ: bw=33.9MiB/s (35.5MB/s), 33.9MiB/s-33.9MiB/s (35.5MB/s-35.5MB/s), io=68.0MiB (71.3MB), run=2007-2007msec 00:19:52.956 WRITE: bw=33.9MiB/s (35.5MB/s), 33.9MiB/s-33.9MiB/s (35.5MB/s-35.5MB/s), io=67.9MiB (71.2MB), run=2007-2007msec 00:19:52.956 11:32:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' 00:19:52.956 11:32:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' 00:19:52.956 11:32:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:19:52.956 11:32:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:52.956 11:32:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:19:52.956 11:32:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:19:52.956 11:32:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:19:52.956 11:32:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:19:52.956 11:32:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:19:52.956 11:32:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:19:52.956 11:32:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:19:52.956 11:32:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:19:52.956 11:32:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:19:52.956 11:32:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:19:52.956 11:32:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:19:52.956 11:32:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:19:52.956 11:32:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:19:52.956 11:32:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:19:52.956 11:32:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:19:52.956 11:32:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:19:52.956 11:32:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:19:52.956 11:32:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' 00:19:52.956 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:19:52.956 fio-3.35 00:19:52.956 Starting 1 thread 00:19:55.490 00:19:55.490 test: (groupid=0, jobs=1): err= 0: pid=87865: Wed Nov 20 11:32:40 2024 00:19:55.490 read: IOPS=7440, BW=116MiB/s (122MB/s)(233MiB/2007msec) 00:19:55.490 slat (usec): min=3, max=124, avg= 4.42, stdev= 2.11 00:19:55.490 clat (usec): min=2965, max=19777, avg=10107.29, stdev=2450.36 00:19:55.490 lat (usec): min=2969, max=19782, avg=10111.71, stdev=2450.52 00:19:55.490 clat percentiles (usec): 00:19:55.490 | 1.00th=[ 5342], 5.00th=[ 6390], 10.00th=[ 6980], 20.00th=[ 7832], 00:19:55.490 | 30.00th=[ 8586], 40.00th=[ 9241], 50.00th=[10028], 60.00th=[10814], 00:19:55.490 | 70.00th=[11469], 80.00th=[12387], 90.00th=[13042], 95.00th=[14091], 00:19:55.490 | 99.00th=[16188], 99.50th=[16712], 99.90th=[18220], 99.95th=[18482], 00:19:55.490 | 99.99th=[19792] 00:19:55.490 bw ( KiB/s): min=50208, max=73216, per=51.99%, avg=61888.00, stdev=10547.81, samples=4 00:19:55.490 iops : min= 3138, max= 4576, avg=3868.00, stdev=659.24, samples=4 00:19:55.490 write: IOPS=4527, BW=70.7MiB/s (74.2MB/s)(126MiB/1782msec); 0 zone resets 00:19:55.490 slat (usec): min=37, max=395, avg=42.31, stdev= 9.21 00:19:55.490 clat (usec): min=4313, max=21089, avg=12063.59, stdev=1966.27 00:19:55.490 lat (usec): min=4352, max=21136, avg=12105.90, stdev=1967.44 00:19:55.490 clat percentiles (usec): 00:19:55.490 | 1.00th=[ 7832], 5.00th=[ 9372], 10.00th=[ 9896], 20.00th=[10421], 00:19:55.490 | 30.00th=[10945], 40.00th=[11469], 50.00th=[11863], 60.00th=[12387], 00:19:55.490 | 70.00th=[12911], 80.00th=[13435], 90.00th=[14484], 95.00th=[15664], 00:19:55.490 | 99.00th=[17957], 99.50th=[18482], 99.90th=[19530], 99.95th=[19792], 00:19:55.490 | 99.99th=[21103] 00:19:55.490 bw ( KiB/s): min=51872, max=76032, per=88.60%, avg=64184.00, stdev=11128.78, samples=4 00:19:55.490 iops : min= 3242, max= 4752, avg=4011.50, stdev=695.55, samples=4 00:19:55.490 lat (msec) : 4=0.07%, 10=36.41%, 20=63.51%, 50=0.01% 00:19:55.490 cpu : usr=69.79%, sys=19.69%, ctx=4, majf=0, minf=20 00:19:55.490 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.6% 00:19:55.490 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:55.490 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:55.490 issued rwts: total=14933,8068,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:55.490 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:55.490 00:19:55.490 Run status group 0 (all jobs): 00:19:55.490 READ: bw=116MiB/s (122MB/s), 116MiB/s-116MiB/s (122MB/s-122MB/s), io=233MiB (245MB), run=2007-2007msec 00:19:55.490 WRITE: bw=70.7MiB/s 
(74.2MB/s), 70.7MiB/s-70.7MiB/s (74.2MB/s-74.2MB/s), io=126MiB (132MB), run=1782-1782msec 00:19:55.490 11:32:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:55.490 11:32:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:19:55.490 11:32:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:19:55.490 11:32:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:19:55.490 11:32:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:19:55.490 11:32:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:55.490 11:32:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:19:55.490 11:32:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:55.490 11:32:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:19:55.490 11:32:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:55.490 11:32:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:55.490 rmmod nvme_tcp 00:19:55.490 rmmod nvme_fabrics 00:19:55.490 rmmod nvme_keyring 00:19:55.490 11:32:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:55.490 11:32:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:19:55.490 11:32:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:19:55.490 11:32:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@517 -- # '[' -n 87699 ']' 00:19:55.490 11:32:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # killprocess 87699 00:19:55.490 11:32:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # '[' -z 87699 ']' 00:19:55.490 11:32:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # kill -0 87699 00:19:55.490 11:32:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # uname 00:19:55.490 11:32:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:55.490 11:32:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87699 00:19:55.490 11:32:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:55.490 11:32:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:55.490 killing process with pid 87699 00:19:55.490 11:32:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87699' 00:19:55.490 11:32:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@973 -- # kill 87699 00:19:55.490 11:32:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@978 -- # wait 87699 00:19:55.749 11:32:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:55.749 11:32:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:55.749 11:32:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:55.749 11:32:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:19:55.749 11:32:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:55.749 11:32:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 
-- # iptables-save 00:19:55.749 11:32:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-restore 00:19:55.749 11:32:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:55.749 11:32:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:19:55.749 11:32:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:19:55.749 11:32:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:19:55.749 11:32:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:19:55.749 11:32:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:19:55.749 11:32:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:19:55.749 11:32:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:19:55.749 11:32:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:19:55.749 11:32:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:19:55.749 11:32:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:19:56.010 11:32:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:19:56.010 11:32:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:19:56.010 11:32:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:56.010 11:32:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:56.010 11:32:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@246 -- # remove_spdk_ns 00:19:56.010 11:32:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:56.010 11:32:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:56.010 11:32:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:56.010 11:32:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@300 -- # return 0 00:19:56.010 00:19:56.010 real 0m8.871s 00:19:56.010 user 0m35.668s 00:19:56.010 sys 0m2.365s 00:19:56.010 11:32:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:56.010 11:32:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:19:56.010 ************************************ 00:19:56.010 END TEST nvmf_fio_host 00:19:56.010 ************************************ 00:19:56.010 11:32:41 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:19:56.010 11:32:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:56.010 11:32:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:56.010 11:32:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:19:56.010 ************************************ 00:19:56.010 START TEST nvmf_failover 00:19:56.010 ************************************ 00:19:56.010 11:32:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1129 -- # 
/home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:19:56.010 * Looking for test storage... 00:19:56.010 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:19:56.010 11:32:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:56.010 11:32:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # lcov --version 00:19:56.010 11:32:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:56.271 11:32:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:56.271 11:32:41 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:56.271 11:32:41 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:56.271 11:32:41 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:56.271 11:32:41 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:19:56.271 11:32:41 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:19:56.271 11:32:41 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:19:56.271 11:32:41 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:19:56.271 11:32:41 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:19:56.271 11:32:41 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:19:56.271 11:32:41 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:19:56.271 11:32:41 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:56.271 11:32:41 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:19:56.271 11:32:41 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:19:56.271 11:32:41 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:56.271 11:32:41 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:56.271 11:32:41 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:19:56.271 11:32:41 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:19:56.271 11:32:41 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:56.271 11:32:41 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:19:56.271 11:32:41 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:19:56.271 11:32:41 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:19:56.271 11:32:41 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:19:56.271 11:32:41 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:56.271 11:32:41 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:19:56.271 11:32:41 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:19:56.271 11:32:41 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:56.271 11:32:41 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:56.271 11:32:41 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:19:56.271 11:32:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:56.271 11:32:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:56.271 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:56.271 --rc genhtml_branch_coverage=1 00:19:56.271 --rc genhtml_function_coverage=1 00:19:56.271 --rc genhtml_legend=1 00:19:56.271 --rc geninfo_all_blocks=1 00:19:56.271 --rc geninfo_unexecuted_blocks=1 00:19:56.271 00:19:56.271 ' 00:19:56.271 11:32:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:56.271 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:56.271 --rc genhtml_branch_coverage=1 00:19:56.271 --rc genhtml_function_coverage=1 00:19:56.271 --rc genhtml_legend=1 00:19:56.271 --rc geninfo_all_blocks=1 00:19:56.271 --rc geninfo_unexecuted_blocks=1 00:19:56.271 00:19:56.271 ' 00:19:56.271 11:32:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:56.271 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:56.271 --rc genhtml_branch_coverage=1 00:19:56.271 --rc genhtml_function_coverage=1 00:19:56.271 --rc genhtml_legend=1 00:19:56.271 --rc geninfo_all_blocks=1 00:19:56.271 --rc geninfo_unexecuted_blocks=1 00:19:56.271 00:19:56.271 ' 00:19:56.271 11:32:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:56.271 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:56.271 --rc genhtml_branch_coverage=1 00:19:56.271 --rc genhtml_function_coverage=1 00:19:56.271 --rc genhtml_legend=1 00:19:56.271 --rc geninfo_all_blocks=1 00:19:56.271 --rc geninfo_unexecuted_blocks=1 00:19:56.271 00:19:56.271 ' 00:19:56.271 11:32:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:56.271 11:32:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:19:56.271 11:32:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:56.271 11:32:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # 
NVMF_PORT=4420 00:19:56.272 11:32:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:56.272 11:32:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:56.272 11:32:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:56.272 11:32:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:56.272 11:32:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:56.272 11:32:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:56.272 11:32:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:56.272 11:32:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:56.272 11:32:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a 00:19:56.272 11:32:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=806d442c-08ba-4719-b76d-33169283459a 00:19:56.272 11:32:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:56.272 11:32:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:56.272 11:32:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:56.272 11:32:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:56.272 11:32:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:56.272 11:32:41 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:19:56.272 11:32:41 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:56.272 11:32:41 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:56.272 11:32:41 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:56.272 11:32:41 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:56.272 11:32:41 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:56.272 
11:32:41 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:56.272 11:32:41 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:19:56.272 11:32:41 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:56.272 11:32:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:19:56.272 11:32:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:56.272 11:32:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:56.272 11:32:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:56.272 11:32:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:56.272 11:32:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:56.272 11:32:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:56.272 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:56.272 11:32:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:56.272 11:32:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:56.272 11:32:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:56.272 11:32:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:56.272 11:32:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:56.272 11:32:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:56.272 11:32:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:56.272 11:32:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:19:56.272 11:32:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:56.272 11:32:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:56.272 11:32:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:56.272 11:32:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # local -g is_hw=no 
00:19:56.272 11:32:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:56.272 11:32:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:56.272 11:32:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:56.272 11:32:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:56.272 11:32:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:19:56.272 11:32:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:19:56.272 11:32:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:19:56.272 11:32:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:19:56.272 11:32:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:19:56.272 11:32:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@460 -- # nvmf_veth_init 00:19:56.272 11:32:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:56.272 11:32:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:19:56.272 11:32:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:19:56.272 11:32:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:19:56.272 11:32:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:56.272 11:32:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:19:56.272 11:32:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:56.272 11:32:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:19:56.272 11:32:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:56.272 11:32:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:19:56.272 11:32:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:56.272 11:32:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:56.272 11:32:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:56.272 11:32:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:56.272 11:32:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:56.272 11:32:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:56.272 11:32:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:19:56.272 Cannot find device "nvmf_init_br" 00:19:56.272 11:32:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@162 -- # true 00:19:56.272 11:32:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:19:56.272 Cannot find device "nvmf_init_br2" 00:19:56.272 11:32:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@163 -- # true 00:19:56.272 11:32:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 
00:19:56.273 Cannot find device "nvmf_tgt_br" 00:19:56.273 11:32:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@164 -- # true 00:19:56.273 11:32:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:19:56.273 Cannot find device "nvmf_tgt_br2" 00:19:56.273 11:32:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@165 -- # true 00:19:56.273 11:32:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:19:56.273 Cannot find device "nvmf_init_br" 00:19:56.273 11:32:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@166 -- # true 00:19:56.273 11:32:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:19:56.273 Cannot find device "nvmf_init_br2" 00:19:56.273 11:32:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@167 -- # true 00:19:56.273 11:32:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:19:56.273 Cannot find device "nvmf_tgt_br" 00:19:56.273 11:32:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@168 -- # true 00:19:56.273 11:32:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:19:56.273 Cannot find device "nvmf_tgt_br2" 00:19:56.273 11:32:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@169 -- # true 00:19:56.273 11:32:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:19:56.273 Cannot find device "nvmf_br" 00:19:56.273 11:32:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@170 -- # true 00:19:56.273 11:32:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:19:56.273 Cannot find device "nvmf_init_if" 00:19:56.273 11:32:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@171 -- # true 00:19:56.273 11:32:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:19:56.273 Cannot find device "nvmf_init_if2" 00:19:56.273 11:32:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@172 -- # true 00:19:56.273 11:32:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:56.273 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:56.273 11:32:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@173 -- # true 00:19:56.273 11:32:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:56.273 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:56.273 11:32:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@174 -- # true 00:19:56.273 11:32:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:19:56.273 11:32:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:56.532 11:32:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:19:56.532 11:32:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:56.532 11:32:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:56.532 11:32:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:56.532 
11:32:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:56.532 11:32:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:56.532 11:32:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:19:56.532 11:32:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:19:56.532 11:32:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:19:56.532 11:32:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:19:56.532 11:32:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:19:56.532 11:32:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:19:56.532 11:32:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:19:56.532 11:32:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:19:56.532 11:32:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:19:56.532 11:32:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:56.532 11:32:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:56.532 11:32:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:56.532 11:32:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:19:56.532 11:32:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:19:56.532 11:32:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:19:56.532 11:32:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:19:56.532 11:32:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:56.532 11:32:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:56.532 11:32:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:56.532 11:32:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:19:56.532 11:32:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:19:56.532 11:32:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:19:56.532 11:32:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:56.532 11:32:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j 
ACCEPT' 00:19:56.792 11:32:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:19:56.792 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:56.792 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.095 ms 00:19:56.792 00:19:56.792 --- 10.0.0.3 ping statistics --- 00:19:56.792 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:56.792 rtt min/avg/max/mdev = 0.095/0.095/0.095/0.000 ms 00:19:56.792 11:32:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:19:56.792 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:19:56.792 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.050 ms 00:19:56.792 00:19:56.792 --- 10.0.0.4 ping statistics --- 00:19:56.792 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:56.792 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:19:56.792 11:32:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:56.792 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:56.792 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.021 ms 00:19:56.792 00:19:56.792 --- 10.0.0.1 ping statistics --- 00:19:56.792 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:56.792 rtt min/avg/max/mdev = 0.021/0.021/0.021/0.000 ms 00:19:56.792 11:32:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:19:56.792 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:56.792 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.058 ms 00:19:56.792 00:19:56.792 --- 10.0.0.2 ping statistics --- 00:19:56.792 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:56.792 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:19:56.792 11:32:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:56.792 11:32:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@461 -- # return 0 00:19:56.792 11:32:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:56.792 11:32:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:56.792 11:32:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:56.792 11:32:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:56.792 11:32:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:56.792 11:32:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:56.792 11:32:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:56.792 11:32:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:19:56.792 11:32:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:56.792 11:32:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:56.792 11:32:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:19:56.792 11:32:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # nvmfpid=88131 00:19:56.792 11:32:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:19:56.792 11:32:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@510 -- # waitforlisten 88131 00:19:56.792 11:32:42 
nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 88131 ']' 00:19:56.792 11:32:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:56.792 11:32:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:56.792 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:56.792 11:32:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:56.792 11:32:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:56.792 11:32:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:19:56.792 [2024-11-20 11:32:42.227440] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 00:19:56.792 [2024-11-20 11:32:42.227555] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:56.792 [2024-11-20 11:32:42.376020] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:57.051 [2024-11-20 11:32:42.409084] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:57.051 [2024-11-20 11:32:42.409483] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:57.051 [2024-11-20 11:32:42.409688] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:57.051 [2024-11-20 11:32:42.409909] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:57.051 [2024-11-20 11:32:42.410044] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:57.051 [2024-11-20 11:32:42.410839] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:57.051 [2024-11-20 11:32:42.411124] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:19:57.051 [2024-11-20 11:32:42.411136] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:57.051 11:32:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:57.051 11:32:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:19:57.051 11:32:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:57.051 11:32:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:57.051 11:32:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:19:57.051 11:32:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:57.051 11:32:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:19:57.310 [2024-11-20 11:32:42.825690] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:57.310 11:32:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:19:57.876 Malloc0 00:19:57.876 11:32:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:57.876 11:32:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:58.443 11:32:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:19:58.443 [2024-11-20 11:32:43.969725] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:19:58.443 11:32:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:19:58.701 [2024-11-20 11:32:44.233999] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:19:58.701 11:32:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422 00:19:58.960 [2024-11-20 11:32:44.498526] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4422 *** 00:19:58.960 11:32:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=88235 00:19:58.960 11:32:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:58.960 11:32:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 88235 /var/tmp/bdevperf.sock 00:19:58.960 11:32:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 
00:19:58.960 11:32:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 88235 ']' 00:19:58.960 11:32:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:58.960 11:32:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:58.960 11:32:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:58.960 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:58.960 11:32:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:58.960 11:32:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:19:59.553 11:32:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:59.553 11:32:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:19:59.553 11:32:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:19:59.811 NVMe0n1 00:19:59.811 11:32:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:20:00.069 00:20:00.069 11:32:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=88269 00:20:00.070 11:32:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:00.070 11:32:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:20:01.443 11:32:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:20:01.443 [2024-11-20 11:32:46.951576] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e58830 is same with the state(6) to be set 00:20:01.443 [2024-11-20 11:32:46.951668] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e58830 is same with the state(6) to be set 00:20:01.444 [2024-11-20 11:32:46.951687] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e58830 is same with the state(6) to be set 00:20:01.444 [2024-11-20 11:32:46.951701] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e58830 is same with the state(6) to be set 00:20:01.444 [2024-11-20 11:32:46.951714] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e58830 is same with the state(6) to be set 00:20:01.444 [2024-11-20 11:32:46.951727] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e58830 is same with the state(6) to be set 00:20:01.444 [2024-11-20 11:32:46.951740] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e58830 is same with the state(6) to be set 00:20:01.444 [2024-11-20 11:32:46.951754] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e58830 is same with the state(6) to be set 00:20:01.444 
[2024-11-20 11:32:46.951766] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e58830 is same with the state(6) to be set 00:20:01.444 [2024-11-20 11:32:46.951779] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e58830 is same with the state(6) to be set 00:20:01.444 [2024-11-20 11:32:46.951792] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e58830 is same with the state(6) to be set 00:20:01.444 [2024-11-20 11:32:46.951805] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e58830 is same with the state(6) to be set 00:20:01.444 [2024-11-20 11:32:46.951818] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e58830 is same with the state(6) to be set 00:20:01.444 [2024-11-20 11:32:46.951831] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e58830 is same with the state(6) to be set 00:20:01.444 [2024-11-20 11:32:46.951845] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e58830 is same with the state(6) to be set 00:20:01.444 [2024-11-20 11:32:46.951858] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e58830 is same with the state(6) to be set 00:20:01.444 [2024-11-20 11:32:46.951871] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e58830 is same with the state(6) to be set 00:20:01.444 [2024-11-20 11:32:46.951884] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e58830 is same with the state(6) to be set 00:20:01.444 [2024-11-20 11:32:46.951896] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e58830 is same with the state(6) to be set 00:20:01.444 [2024-11-20 11:32:46.951910] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e58830 is same with the state(6) to be set 00:20:01.444 [2024-11-20 11:32:46.951923] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e58830 is same with the state(6) to be set 00:20:01.444 [2024-11-20 11:32:46.951937] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e58830 is same with the state(6) to be set 00:20:01.444 [2024-11-20 11:32:46.951950] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e58830 is same with the state(6) to be set 00:20:01.444 [2024-11-20 11:32:46.951972] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e58830 is same with the state(6) to be set 00:20:01.444 [2024-11-20 11:32:46.951986] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e58830 is same with the state(6) to be set 00:20:01.444 [2024-11-20 11:32:46.951999] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e58830 is same with the state(6) to be set 00:20:01.444 [2024-11-20 11:32:46.952012] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e58830 is same with the state(6) to be set 00:20:01.444 [2024-11-20 11:32:46.952025] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e58830 is same with the state(6) to be set 00:20:01.444 [2024-11-20 11:32:46.952039] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e58830 is same with the state(6) to be set 00:20:01.444 [2024-11-20 11:32:46.952052] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x1e58830 is same with the state(6) to be set 00:20:01.444 [2024-11-20 11:32:46.952065] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e58830 is same with the state(6) to be set 00:20:01.444 [2024-11-20 11:32:46.952078] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e58830 is same with the state(6) to be set 00:20:01.444 [2024-11-20 11:32:46.952091] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e58830 is same with the state(6) to be set 00:20:01.444 [2024-11-20 11:32:46.952104] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e58830 is same with the state(6) to be set 00:20:01.444 [2024-11-20 11:32:46.952117] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e58830 is same with the state(6) to be set 00:20:01.444 [2024-11-20 11:32:46.952130] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e58830 is same with the state(6) to be set 00:20:01.444 [2024-11-20 11:32:46.952143] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e58830 is same with the state(6) to be set 00:20:01.444 [2024-11-20 11:32:46.952154] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e58830 is same with the state(6) to be set 00:20:01.444 [2024-11-20 11:32:46.952168] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e58830 is same with the state(6) to be set 00:20:01.444 [2024-11-20 11:32:46.952181] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e58830 is same with the state(6) to be set 00:20:01.444 [2024-11-20 11:32:46.952195] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e58830 is same with the state(6) to be set 00:20:01.444 [2024-11-20 11:32:46.952208] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e58830 is same with the state(6) to be set 00:20:01.444 [2024-11-20 11:32:46.952221] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e58830 is same with the state(6) to be set 00:20:01.444 [2024-11-20 11:32:46.952233] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e58830 is same with the state(6) to be set 00:20:01.444 [2024-11-20 11:32:46.952245] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e58830 is same with the state(6) to be set 00:20:01.444 [2024-11-20 11:32:46.952258] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e58830 is same with the state(6) to be set 00:20:01.444 [2024-11-20 11:32:46.952271] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e58830 is same with the state(6) to be set 00:20:01.444 [2024-11-20 11:32:46.952284] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e58830 is same with the state(6) to be set 00:20:01.444 [2024-11-20 11:32:46.952297] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e58830 is same with the state(6) to be set 00:20:01.444 [2024-11-20 11:32:46.952753] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e58830 is same with the state(6) to be set 00:20:01.444 [2024-11-20 11:32:46.952793] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e58830 is same with the state(6) to be set 00:20:01.444 [2024-11-20 11:32:46.952808] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e58830 is same with the state(6) to be set 00:20:01.444 [2024-11-20 11:32:46.952821] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e58830 is same with the state(6) to be set 00:20:01.444 [2024-11-20 11:32:46.952834] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e58830 is same with the state(6) to be set 00:20:01.444 [2024-11-20 11:32:46.952847] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e58830 is same with the state(6) to be set 00:20:01.444 [2024-11-20 11:32:46.952861] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e58830 is same with the state(6) to be set 00:20:01.444 [2024-11-20 11:32:46.952873] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e58830 is same with the state(6) to be set 00:20:01.444 [2024-11-20 11:32:46.953143] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e58830 is same with the state(6) to be set 00:20:01.444 [2024-11-20 11:32:46.953173] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e58830 is same with the state(6) to be set 00:20:01.444 [2024-11-20 11:32:46.953189] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e58830 is same with the state(6) to be set 00:20:01.444 [2024-11-20 11:32:46.953203] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e58830 is same with the state(6) to be set 00:20:01.444 [2024-11-20 11:32:46.953218] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e58830 is same with the state(6) to be set 00:20:01.444 [2024-11-20 11:32:46.953231] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e58830 is same with the state(6) to be set 00:20:01.444 [2024-11-20 11:32:46.953244] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e58830 is same with the state(6) to be set 00:20:01.444 [2024-11-20 11:32:46.953257] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e58830 is same with the state(6) to be set 00:20:01.444 [2024-11-20 11:32:46.954135] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e58830 is same with the state(6) to be set 00:20:01.444 [2024-11-20 11:32:46.954176] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e58830 is same with the state(6) to be set 00:20:01.444 [2024-11-20 11:32:46.954193] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e58830 is same with the state(6) to be set 00:20:01.445 [2024-11-20 11:32:46.954206] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e58830 is same with the state(6) to be set 00:20:01.445 [2024-11-20 11:32:46.954219] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e58830 is same with the state(6) to be set 00:20:01.445 11:32:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:20:04.734 11:32:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:20:04.992 00:20:05.249 11:32:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:20:05.515 [2024-11-20 11:32:50.946922] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e59600 is same with the state(6) to be set 00:20:05.515 [2024-11-20 11:32:50.946978] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e59600 is same with the state(6) to be set 00:20:05.515 [2024-11-20 11:32:50.946989] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e59600 is same with the state(6) to be set 00:20:05.515 [2024-11-20 11:32:50.946998] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e59600 is same with the state(6) to be set 00:20:05.515 [2024-11-20 11:32:50.947007] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e59600 is same with the state(6) to be set 00:20:05.515 [2024-11-20 11:32:50.947015] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e59600 is same with the state(6) to be set 00:20:05.515 [2024-11-20 11:32:50.947023] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e59600 is same with the state(6) to be set 00:20:05.515 11:32:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:20:08.818 11:32:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:20:08.818 [2024-11-20 11:32:54.247589] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:20:08.818 11:32:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:20:09.752 11:32:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422 00:20:10.011 [2024-11-20 11:32:55.536690] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fa3930 is same with the state(6) to be set 00:20:10.011 [2024-11-20 11:32:55.536741] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fa3930 is same with the state(6) to be set 00:20:10.011 [2024-11-20 11:32:55.536752] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fa3930 is same with the state(6) to be set 00:20:10.011 [2024-11-20 11:32:55.536761] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fa3930 is same with the state(6) to be set 00:20:10.011 [2024-11-20 11:32:55.536769] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fa3930 is same with the state(6) to be set 00:20:10.011 [2024-11-20 11:32:55.536777] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fa3930 is same with the state(6) to be set 00:20:10.011 11:32:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 88269 00:20:15.281 { 00:20:15.281 "results": [ 00:20:15.281 { 00:20:15.281 "job": "NVMe0n1", 00:20:15.281 "core_mask": "0x1", 00:20:15.281 "workload": "verify", 00:20:15.281 "status": "finished", 00:20:15.281 "verify_range": { 00:20:15.281 "start": 0, 00:20:15.281 "length": 16384 00:20:15.281 }, 00:20:15.281 "queue_depth": 128, 00:20:15.281 "io_size": 4096, 00:20:15.281 "runtime": 15.008341, 00:20:15.281 "iops": 8023.671636991723, 00:20:15.281 "mibps": 
31.342467331998918, 00:20:15.281 "io_failed": 3293, 00:20:15.281 "io_timeout": 0, 00:20:15.281 "avg_latency_us": 15493.029064896225, 00:20:15.281 "min_latency_us": 621.8472727272728, 00:20:15.281 "max_latency_us": 37176.785454545454 00:20:15.281 } 00:20:15.281 ], 00:20:15.281 "core_count": 1 00:20:15.281 } 00:20:15.281 11:33:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 88235 00:20:15.281 11:33:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 88235 ']' 00:20:15.281 11:33:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 88235 00:20:15.281 11:33:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:20:15.281 11:33:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:15.281 11:33:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88235 00:20:15.281 killing process with pid 88235 00:20:15.281 11:33:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:15.281 11:33:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:15.281 11:33:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88235' 00:20:15.281 11:33:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 88235 00:20:15.281 11:33:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 88235 00:20:15.550 11:33:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:20:15.550 [2024-11-20 11:32:44.583277] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 00:20:15.550 [2024-11-20 11:32:44.583458] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88235 ] 00:20:15.550 [2024-11-20 11:32:44.735685] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:15.550 [2024-11-20 11:32:44.768948] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:15.550 Running I/O for 15 seconds... 
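The failover exercise above is easier to follow with the tcp.c error noise stripped out. Below is a minimal sketch, not part of the captured log, that restates the host/failover.sh steps using only the rpc.py invocations visible above; the rpc shell variable and the comments are additions made here.

```bash
#!/usr/bin/env bash
# Sketch of the failover steps host/failover.sh runs above (script lines @45-@59);
# every rpc.py invocation is copied from the log, only $rpc and the comments are added.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

sleep 3
# Attach a second path (port 4422) to the controller in explicit failover mode;
# this RPC is sent to the bdevperf application socket via -s.
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
     -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover

# Remove the port-4421 listener from the subsystem, forcing the host to fail over
# to another path; the listener RPCs are issued without -s, i.e. to the default
# RPC socket (the nvmf target in this run).
$rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421
sleep 3

# Bring the original port-4420 listener back, then drop port 4422 as well.
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
sleep 1
$rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422

# The script then waits for the background bdevperf job (pid 88269 in this run)
# to finish its 15-second verify workload and prints the JSON summary shown above.
```

bdevperf keeps its verify workload running while the listeners change, which is why the JSON summary above reports io_failed alongside the completed I/O statistics for the 15-second run.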
00:20:15.550 8192.00 IOPS, 32.00 MiB/s [2024-11-20T11:33:01.139Z] [2024-11-20 11:32:46.955004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:74752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:15.550 [2024-11-20 11:32:46.955058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.550 [2024-11-20 11:32:46.955088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:74760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:15.550 [2024-11-20 11:32:46.955111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.550 [2024-11-20 11:32:46.955129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:74768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:15.550 [2024-11-20 11:32:46.955144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.550 [2024-11-20 11:32:46.955160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:74776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:15.550 [2024-11-20 11:32:46.955175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.550 [2024-11-20 11:32:46.955195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:74784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:15.550 [2024-11-20 11:32:46.955220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.550 [2024-11-20 11:32:46.955245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:74792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:15.550 [2024-11-20 11:32:46.955265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.550 [2024-11-20 11:32:46.955287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:74800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:15.551 [2024-11-20 11:32:46.955307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.551 [2024-11-20 11:32:46.955330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:74808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:15.551 [2024-11-20 11:32:46.955350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.551 [2024-11-20 11:32:46.955373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:74816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:15.551 [2024-11-20 11:32:46.955393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.551 [2024-11-20 11:32:46.955416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:74824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:15.551 [2024-11-20 11:32:46.955436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:20:15.551 [2024-11-20 11:32:46.955459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:74832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:15.551 [2024-11-20 11:32:46.955479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.551 [2024-11-20 11:32:46.955537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:74840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:15.551 [2024-11-20 11:32:46.955559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.551 [2024-11-20 11:32:46.955582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:74848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:15.551 [2024-11-20 11:32:46.955603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.551 [2024-11-20 11:32:46.955650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:74856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:15.551 [2024-11-20 11:32:46.955667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.551 [2024-11-20 11:32:46.955684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:74864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:15.551 [2024-11-20 11:32:46.955699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.551 [2024-11-20 11:32:46.955716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:74872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:15.551 [2024-11-20 11:32:46.955730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.551 [2024-11-20 11:32:46.955754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:74880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:15.551 [2024-11-20 11:32:46.955775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.551 [2024-11-20 11:32:46.955804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:74888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:15.551 [2024-11-20 11:32:46.955825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.551 [2024-11-20 11:32:46.955848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:74896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:15.551 [2024-11-20 11:32:46.955869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.551 [2024-11-20 11:32:46.955891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:74904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:15.551 [2024-11-20 11:32:46.955911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.551 [2024-11-20 
11:32:46.955934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:74912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:15.551 [2024-11-20 11:32:46.955955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.551 [2024-11-20 11:32:46.955980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:74920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:15.551 [2024-11-20 11:32:46.956002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.551 [2024-11-20 11:32:46.956020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:74928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:15.551 [2024-11-20 11:32:46.956035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.551 [2024-11-20 11:32:46.956051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:74936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:15.551 [2024-11-20 11:32:46.956077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.551 [2024-11-20 11:32:46.956094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:74944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:15.551 [2024-11-20 11:32:46.956110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.551 [2024-11-20 11:32:46.956127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:74952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:15.551 [2024-11-20 11:32:46.956141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.551 [2024-11-20 11:32:46.956158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:74960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:15.551 [2024-11-20 11:32:46.956172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.551 [2024-11-20 11:32:46.956189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:74968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:15.551 [2024-11-20 11:32:46.956203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.551 [2024-11-20 11:32:46.956225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:74976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:15.551 [2024-11-20 11:32:46.956240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.551 [2024-11-20 11:32:46.956256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:74984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:15.551 [2024-11-20 11:32:46.956270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.551 [2024-11-20 11:32:46.956287] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:74992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:15.551 [2024-11-20 11:32:46.956301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.551 [2024-11-20 11:32:46.956317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:75000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:15.551 [2024-11-20 11:32:46.956332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.551 [2024-11-20 11:32:46.956348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:75008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:15.551 [2024-11-20 11:32:46.956363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.551 [2024-11-20 11:32:46.956382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:75016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:15.551 [2024-11-20 11:32:46.956397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.551 [2024-11-20 11:32:46.956413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:75024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:15.551 [2024-11-20 11:32:46.956428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.551 [2024-11-20 11:32:46.956444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:75032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:15.551 [2024-11-20 11:32:46.956459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.551 [2024-11-20 11:32:46.956483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:75040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:15.551 [2024-11-20 11:32:46.956498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.551 [2024-11-20 11:32:46.956515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:75048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:15.551 [2024-11-20 11:32:46.956530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.551 [2024-11-20 11:32:46.956547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:75056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:15.551 [2024-11-20 11:32:46.956561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.551 [2024-11-20 11:32:46.956578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:75064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:15.551 [2024-11-20 11:32:46.956592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.551 [2024-11-20 11:32:46.956609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:23 nsid:1 lba:75072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:15.551 [2024-11-20 11:32:46.956638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.551 [2024-11-20 11:32:46.956658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:75080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:15.551 [2024-11-20 11:32:46.956673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.551 [2024-11-20 11:32:46.956690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:75088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:15.551 [2024-11-20 11:32:46.956704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.551 [2024-11-20 11:32:46.956721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:75096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:15.551 [2024-11-20 11:32:46.956736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.551 [2024-11-20 11:32:46.956753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:75104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:15.551 [2024-11-20 11:32:46.956767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.551 [2024-11-20 11:32:46.956784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:75112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:15.551 [2024-11-20 11:32:46.956798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.551 [2024-11-20 11:32:46.956814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:75120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:15.552 [2024-11-20 11:32:46.956829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.552 [2024-11-20 11:32:46.956845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:75128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:15.552 [2024-11-20 11:32:46.956860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.552 [2024-11-20 11:32:46.956883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:75136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:15.552 [2024-11-20 11:32:46.956926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.552 [2024-11-20 11:32:46.956958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:75144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:15.552 [2024-11-20 11:32:46.956979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.552 [2024-11-20 11:32:46.957001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:75152 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:20:15.552 [2024-11-20 11:32:46.957021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.552 [2024-11-20 11:32:46.957043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:75160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:15.552 [2024-11-20 11:32:46.957064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.552 [2024-11-20 11:32:46.957086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:75168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:15.552 [2024-11-20 11:32:46.957107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.552 [2024-11-20 11:32:46.957134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:75176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:15.552 [2024-11-20 11:32:46.957155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.552 [2024-11-20 11:32:46.957179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:75184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:15.552 [2024-11-20 11:32:46.957199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.552 [2024-11-20 11:32:46.957222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:75192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:15.552 [2024-11-20 11:32:46.957241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.552 [2024-11-20 11:32:46.957264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:75200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:15.552 [2024-11-20 11:32:46.957283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.552 [2024-11-20 11:32:46.957308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:75208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:15.552 [2024-11-20 11:32:46.957328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.552 [2024-11-20 11:32:46.957350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:75216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:15.552 [2024-11-20 11:32:46.957371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.552 [2024-11-20 11:32:46.957393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:75224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:15.552 [2024-11-20 11:32:46.957414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.552 [2024-11-20 11:32:46.957436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:75232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:15.552 [2024-11-20 
11:32:46.957456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.552 [2024-11-20 11:32:46.957479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:75240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:15.552 [2024-11-20 11:32:46.957514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.552 [2024-11-20 11:32:46.957535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:75248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:15.552 [2024-11-20 11:32:46.957549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.552 [2024-11-20 11:32:46.957566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:75256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:15.552 [2024-11-20 11:32:46.957580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.552 [2024-11-20 11:32:46.957624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:75264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:15.552 [2024-11-20 11:32:46.957644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.552 [2024-11-20 11:32:46.957664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:75272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:15.552 [2024-11-20 11:32:46.957680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.552 [2024-11-20 11:32:46.957697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:75280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:15.552 [2024-11-20 11:32:46.957711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.552 [2024-11-20 11:32:46.957728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:75288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:15.552 [2024-11-20 11:32:46.957742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.552 [2024-11-20 11:32:46.957759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:75296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:15.552 [2024-11-20 11:32:46.957774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.552 [2024-11-20 11:32:46.957790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:75304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:15.552 [2024-11-20 11:32:46.957805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.552 [2024-11-20 11:32:46.957823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:75312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:15.552 [2024-11-20 11:32:46.957838] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.552 [2024-11-20 11:32:46.957855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:75320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:15.552 [2024-11-20 11:32:46.957869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.552 [2024-11-20 11:32:46.957885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:75328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:15.552 [2024-11-20 11:32:46.957900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.552 [2024-11-20 11:32:46.957917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:75336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:15.552 [2024-11-20 11:32:46.957931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.552 [2024-11-20 11:32:46.957957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:75344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:15.552 [2024-11-20 11:32:46.957973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.552 [2024-11-20 11:32:46.957990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:75352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:15.552 [2024-11-20 11:32:46.958005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.552 [2024-11-20 11:32:46.958021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:75360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:15.552 [2024-11-20 11:32:46.958036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.552 [2024-11-20 11:32:46.958068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:75368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:15.552 [2024-11-20 11:32:46.958082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.552 [2024-11-20 11:32:46.958099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:75376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:15.552 [2024-11-20 11:32:46.958113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.552 [2024-11-20 11:32:46.958129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:75384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:15.552 [2024-11-20 11:32:46.958143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.552 [2024-11-20 11:32:46.958159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:75392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:15.552 [2024-11-20 11:32:46.958173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.552 [2024-11-20 11:32:46.958192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:75400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:15.552 [2024-11-20 11:32:46.958207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.552 [2024-11-20 11:32:46.958223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:75408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:15.552 [2024-11-20 11:32:46.958237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.552 [2024-11-20 11:32:46.958253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:75416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:15.552 [2024-11-20 11:32:46.958267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.552 [2024-11-20 11:32:46.958283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:75424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:15.552 [2024-11-20 11:32:46.958297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.552 [2024-11-20 11:32:46.958313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:75432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:15.552 [2024-11-20 11:32:46.958344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.552 [2024-11-20 11:32:46.958368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:75440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:15.553 [2024-11-20 11:32:46.958403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.553 [2024-11-20 11:32:46.958430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:75448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:15.553 [2024-11-20 11:32:46.958451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.553 [2024-11-20 11:32:46.958479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:75456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:15.553 [2024-11-20 11:32:46.958501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.553 [2024-11-20 11:32:46.958528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:75464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:15.553 [2024-11-20 11:32:46.958552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.553 [2024-11-20 11:32:46.958574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:75472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:15.553 [2024-11-20 11:32:46.958595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:20:15.553 [2024-11-20 11:32:46.958617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:75480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:15.553 [2024-11-20 11:32:46.958638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.553 [2024-11-20 11:32:46.958676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:75488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:15.553 [2024-11-20 11:32:46.958698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.553 [2024-11-20 11:32:46.958724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:75496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:15.553 [2024-11-20 11:32:46.958749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.553 [2024-11-20 11:32:46.958773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:75504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:15.553 [2024-11-20 11:32:46.958794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.553 [2024-11-20 11:32:46.958817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:75512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:15.553 [2024-11-20 11:32:46.958837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.553 [2024-11-20 11:32:46.958859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:75520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:15.553 [2024-11-20 11:32:46.958880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.553 [2024-11-20 11:32:46.958905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:75528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:15.553 [2024-11-20 11:32:46.958927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.553 [2024-11-20 11:32:46.958949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:75536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:15.553 [2024-11-20 11:32:46.958969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.553 [2024-11-20 11:32:46.958992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:15.553 [2024-11-20 11:32:46.959025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.553 [2024-11-20 11:32:46.959048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:75552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:15.553 [2024-11-20 11:32:46.959068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.553 [2024-11-20 11:32:46.959090] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:75560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:15.553 [2024-11-20 11:32:46.959111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.553 [2024-11-20 11:32:46.959133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:75568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:15.553 [2024-11-20 11:32:46.959153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.553 [2024-11-20 11:32:46.959176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:75576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:15.553 [2024-11-20 11:32:46.959196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.553 [2024-11-20 11:32:46.959219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:75584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:15.553 [2024-11-20 11:32:46.959241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.553 [2024-11-20 11:32:46.959267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:75592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:15.553 [2024-11-20 11:32:46.959283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.553 [2024-11-20 11:32:46.959300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:75600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:15.553 [2024-11-20 11:32:46.959315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.553 [2024-11-20 11:32:46.959331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:75608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:15.553 [2024-11-20 11:32:46.959347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.553 [2024-11-20 11:32:46.959364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:75616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:15.553 [2024-11-20 11:32:46.959379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.553 [2024-11-20 11:32:46.959396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:75624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:15.553 [2024-11-20 11:32:46.959411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.553 [2024-11-20 11:32:46.959428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:75632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:15.553 [2024-11-20 11:32:46.959443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.553 [2024-11-20 11:32:46.959459] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:75640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:15.553 [2024-11-20 11:32:46.959474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.553 [2024-11-20 11:32:46.959498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:75648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:15.553 [2024-11-20 11:32:46.959514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.553 [2024-11-20 11:32:46.959532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:75656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:15.553 [2024-11-20 11:32:46.959547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.553 [2024-11-20 11:32:46.959563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:75664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:15.553 [2024-11-20 11:32:46.959579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.553 [2024-11-20 11:32:46.959595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:75672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:15.553 [2024-11-20 11:32:46.959633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.553 [2024-11-20 11:32:46.959653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:75680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:15.553 [2024-11-20 11:32:46.959668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.553 [2024-11-20 11:32:46.959684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:75688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:15.553 [2024-11-20 11:32:46.959699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.553 [2024-11-20 11:32:46.959716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:75696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:15.553 [2024-11-20 11:32:46.959730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.553 [2024-11-20 11:32:46.959747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:75704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:15.553 [2024-11-20 11:32:46.959762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.553 [2024-11-20 11:32:46.959778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:75712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:15.553 [2024-11-20 11:32:46.959793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.553 [2024-11-20 11:32:46.959809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:75720 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:15.553 [2024-11-20 11:32:46.959824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.553 [2024-11-20 11:32:46.959841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:75728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:15.553 [2024-11-20 11:32:46.959855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.553 [2024-11-20 11:32:46.959872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:75736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:15.553 [2024-11-20 11:32:46.959886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.553 [2024-11-20 11:32:46.959904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:75744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:15.553 [2024-11-20 11:32:46.959937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.553 [2024-11-20 11:32:46.959961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:75752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:15.553 [2024-11-20 11:32:46.959986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.553 [2024-11-20 11:32:46.960011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:75760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:15.554 [2024-11-20 11:32:46.960032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.554 [2024-11-20 11:32:46.960053] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbaf670 is same with the state(6) to be set 00:20:15.554 [2024-11-20 11:32:46.960077] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:15.554 [2024-11-20 11:32:46.960092] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:15.554 [2024-11-20 11:32:46.960109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75768 len:8 PRP1 0x0 PRP2 0x0 00:20:15.554 [2024-11-20 11:32:46.960130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.554 [2024-11-20 11:32:46.960196] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Start failover from 10.0.0.3:4420 to 10.0.0.3:4421 00:20:15.554 [2024-11-20 11:32:46.960275] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:15.554 [2024-11-20 11:32:46.960304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.554 [2024-11-20 11:32:46.960326] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:15.554 [2024-11-20 11:32:46.960346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.554 [2024-11-20 11:32:46.960367] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:15.554 [2024-11-20 11:32:46.960387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.554 [2024-11-20 11:32:46.960408] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:15.554 [2024-11-20 11:32:46.960428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.554 [2024-11-20 11:32:46.960449] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:20:15.554 [2024-11-20 11:32:46.960503] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb42f30 (9): Bad file descriptor 00:20:15.554 [2024-11-20 11:32:46.964568] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:20:15.554 [2024-11-20 11:32:46.994022] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful. 00:20:15.554 7559.50 IOPS, 29.53 MiB/s [2024-11-20T11:33:01.143Z] 7550.00 IOPS, 29.49 MiB/s [2024-11-20T11:33:01.143Z] 7695.50 IOPS, 30.06 MiB/s [2024-11-20T11:33:01.143Z] 7618.80 IOPS, 29.76 MiB/s [2024-11-20T11:33:01.143Z] [2024-11-20 11:32:50.948560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:64256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:15.554 [2024-11-20 11:32:50.948640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.554 [2024-11-20 11:32:50.948678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:64264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:15.554 [2024-11-20 11:32:50.948723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.554 [2024-11-20 11:32:50.948743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:64272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:15.554 [2024-11-20 11:32:50.948759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.554 [2024-11-20 11:32:50.948777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:64280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:15.554 [2024-11-20 11:32:50.948792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.554 [2024-11-20 11:32:50.948809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:64288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:15.554 [2024-11-20 11:32:50.948824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.554 [2024-11-20 11:32:50.948840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:64296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:15.554 [2024-11-20 11:32:50.948855] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.554 [2024-11-20 11:32:50.948871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:64304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:15.554 [2024-11-20 11:32:50.948886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.554 [2024-11-20 11:32:50.948908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:64312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:15.554 [2024-11-20 11:32:50.948922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.554 [2024-11-20 11:32:50.948939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:64320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:15.554 [2024-11-20 11:32:50.948954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.554 [2024-11-20 11:32:50.948970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:64328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:15.554 [2024-11-20 11:32:50.948985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.554 [2024-11-20 11:32:50.949001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:64336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:15.554 [2024-11-20 11:32:50.949016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.554 [2024-11-20 11:32:50.949033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:64344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:15.554 [2024-11-20 11:32:50.949048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.554 [2024-11-20 11:32:50.949075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:64352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:15.554 [2024-11-20 11:32:50.949102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.554 [2024-11-20 11:32:50.949130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:64360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:15.554 [2024-11-20 11:32:50.949156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.554 [2024-11-20 11:32:50.949200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:64368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:15.554 [2024-11-20 11:32:50.949227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.554 [2024-11-20 11:32:50.949255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:64376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:15.554 [2024-11-20 11:32:50.949282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.554 [2024-11-20 11:32:50.949313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:64384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:15.554 [2024-11-20 11:32:50.949338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.554 [2024-11-20 11:32:50.949363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:64392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:15.554 [2024-11-20 11:32:50.949387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.554 [2024-11-20 11:32:50.949413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:64400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:15.554 [2024-11-20 11:32:50.949437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.554 [2024-11-20 11:32:50.949465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:64408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:15.554 [2024-11-20 11:32:50.949489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.554 [2024-11-20 11:32:50.949515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:64416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:15.554 [2024-11-20 11:32:50.949539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.554 [2024-11-20 11:32:50.949565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:64424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:15.554 [2024-11-20 11:32:50.949586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.554 [2024-11-20 11:32:50.949650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:64432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:15.554 [2024-11-20 11:32:50.949674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.554 [2024-11-20 11:32:50.949697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:64440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:15.554 [2024-11-20 11:32:50.949718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.554 [2024-11-20 11:32:50.949741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:64448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:15.554 [2024-11-20 11:32:50.949761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.554 [2024-11-20 11:32:50.949784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:64456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:15.554 [2024-11-20 11:32:50.949806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:20:15.554 [2024-11-20 11:32:50.949828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:64464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:15.554 [2024-11-20 11:32:50.949865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.554 [2024-11-20 11:32:50.949890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:64472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:15.554 [2024-11-20 11:32:50.949911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.554 [2024-11-20 11:32:50.949936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:64480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:15.554 [2024-11-20 11:32:50.949960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.554 [2024-11-20 11:32:50.949984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:64488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:15.554 [2024-11-20 11:32:50.950007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.554 [2024-11-20 11:32:50.950033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:64496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:15.555 [2024-11-20 11:32:50.950056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.555 [2024-11-20 11:32:50.950081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:64504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:15.555 [2024-11-20 11:32:50.950104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.555 [2024-11-20 11:32:50.950129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:64512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:15.555 [2024-11-20 11:32:50.950152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.555 [2024-11-20 11:32:50.950177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:64520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:15.555 [2024-11-20 11:32:50.950200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.555 [2024-11-20 11:32:50.950226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:64528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:15.555 [2024-11-20 11:32:50.950248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.555 [2024-11-20 11:32:50.950274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:64536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:15.555 [2024-11-20 11:32:50.950297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.555 [2024-11-20 11:32:50.950323] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:64544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:15.555 [2024-11-20 11:32:50.950346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.555 [2024-11-20 11:32:50.950371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:64552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:15.555 [2024-11-20 11:32:50.950394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.555 [2024-11-20 11:32:50.950420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:64560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:15.555 [2024-11-20 11:32:50.950444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.555 [2024-11-20 11:32:50.950469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:64568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:15.555 [2024-11-20 11:32:50.950504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.555 [2024-11-20 11:32:50.950531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:64576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:15.555 [2024-11-20 11:32:50.950554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.555 [2024-11-20 11:32:50.950580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:64584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:15.555 [2024-11-20 11:32:50.950628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.555 [2024-11-20 11:32:50.950659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:64592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:15.555 [2024-11-20 11:32:50.950683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.555 [2024-11-20 11:32:50.950708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:64600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:15.555 [2024-11-20 11:32:50.950731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.555 [2024-11-20 11:32:50.950756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:64608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:15.555 [2024-11-20 11:32:50.950779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.555 [2024-11-20 11:32:50.950804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:64616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:15.555 [2024-11-20 11:32:50.950828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.555 [2024-11-20 11:32:50.950853] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:64624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:15.555 [2024-11-20 11:32:50.950877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.555 [2024-11-20 11:32:50.950902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:64632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:15.555 [2024-11-20 11:32:50.950925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.555 [2024-11-20 11:32:50.950950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:64640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:15.555 [2024-11-20 11:32:50.950973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.555 [2024-11-20 11:32:50.950999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:64648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:15.555 [2024-11-20 11:32:50.951024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.555 [2024-11-20 11:32:50.951052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:64656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:15.555 [2024-11-20 11:32:50.951077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.555 [2024-11-20 11:32:50.951106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:64664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:15.555 [2024-11-20 11:32:50.951134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.555 [2024-11-20 11:32:50.951181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:63944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:15.555 [2024-11-20 11:32:50.951207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.555 [2024-11-20 11:32:50.951233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:63952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:15.555 [2024-11-20 11:32:50.951259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.555 [2024-11-20 11:32:50.951286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:63960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:15.555 [2024-11-20 11:32:50.951309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.555 [2024-11-20 11:32:50.951336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:63968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:15.555 [2024-11-20 11:32:50.951360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.555 [2024-11-20 11:32:50.951385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:63976 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:15.555 [2024-11-20 11:32:50.951408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.555 [2024-11-20 11:32:50.951434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:63984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:15.555 [2024-11-20 11:32:50.951459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.555 [2024-11-20 11:32:50.951489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:63992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:15.555 [2024-11-20 11:32:50.951514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.555 [2024-11-20 11:32:50.951540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:64000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:15.555 [2024-11-20 11:32:50.951566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.555 [2024-11-20 11:32:50.951594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:64008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:15.555 [2024-11-20 11:32:50.951637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.555 [2024-11-20 11:32:50.951668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:64016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:15.555 [2024-11-20 11:32:50.951693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.555 [2024-11-20 11:32:50.951719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:64024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:15.555 [2024-11-20 11:32:50.951745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.556 [2024-11-20 11:32:50.951772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:64032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:15.556 [2024-11-20 11:32:50.951798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.556 [2024-11-20 11:32:50.951825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:64040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:15.556 [2024-11-20 11:32:50.951863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.556 [2024-11-20 11:32:50.951892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:64048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:15.556 [2024-11-20 11:32:50.951918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.556 [2024-11-20 11:32:50.951947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:64056 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:20:15.556 [2024-11-20 11:32:50.951973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.556 [2024-11-20 11:32:50.952000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:64064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:15.556 [2024-11-20 11:32:50.952025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.556 [2024-11-20 11:32:50.952055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:64072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:15.556 [2024-11-20 11:32:50.952081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.556 [2024-11-20 11:32:50.952110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:64080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:15.556 [2024-11-20 11:32:50.952137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.556 [2024-11-20 11:32:50.952164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:64088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:15.556 [2024-11-20 11:32:50.952190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.556 [2024-11-20 11:32:50.952217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:64096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:15.556 [2024-11-20 11:32:50.952242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.556 [2024-11-20 11:32:50.952269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:64104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:15.556 [2024-11-20 11:32:50.952296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.556 [2024-11-20 11:32:50.952324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:64112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:15.556 [2024-11-20 11:32:50.952349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.556 [2024-11-20 11:32:50.952376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:64120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:15.556 [2024-11-20 11:32:50.952401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.556 [2024-11-20 11:32:50.952428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:64672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:15.556 [2024-11-20 11:32:50.952454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.556 [2024-11-20 11:32:50.952480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:64680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:15.556 [2024-11-20 
11:32:50.952504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.556 [2024-11-20 11:32:50.952547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:64688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:15.556 [2024-11-20 11:32:50.952572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.556 [2024-11-20 11:32:50.952600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:64696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:15.556 [2024-11-20 11:32:50.952645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.556 [2024-11-20 11:32:50.952674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:64704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:15.556 [2024-11-20 11:32:50.952699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.556 [2024-11-20 11:32:50.952727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:64712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:15.556 [2024-11-20 11:32:50.952751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.556 [2024-11-20 11:32:50.952778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:64720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:15.556 [2024-11-20 11:32:50.952801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.556 [2024-11-20 11:32:50.952829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:64728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:15.556 [2024-11-20 11:32:50.952853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.556 [2024-11-20 11:32:50.952880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:64736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:15.556 [2024-11-20 11:32:50.952906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.556 [2024-11-20 11:32:50.952935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:64744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:15.556 [2024-11-20 11:32:50.952959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.556 [2024-11-20 11:32:50.952985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:64752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:15.556 [2024-11-20 11:32:50.953009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.556 [2024-11-20 11:32:50.953036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:64760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:15.556 [2024-11-20 11:32:50.953062] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.556 [2024-11-20 11:32:50.953090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:64768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:15.556 [2024-11-20 11:32:50.953116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.556 [2024-11-20 11:32:50.953175] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:15.556 [2024-11-20 11:32:50.953202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64776 len:8 PRP1 0x0 PRP2 0x0 00:20:15.556 [2024-11-20 11:32:50.953225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.556 [2024-11-20 11:32:50.953493] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:15.556 [2024-11-20 11:32:50.953538] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:15.556 [2024-11-20 11:32:50.953562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64784 len:8 PRP1 0x0 PRP2 0x0 00:20:15.556 [2024-11-20 11:32:50.953586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.556 [2024-11-20 11:32:50.953647] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:15.556 [2024-11-20 11:32:50.953668] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:15.556 [2024-11-20 11:32:50.953688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64792 len:8 PRP1 0x0 PRP2 0x0 00:20:15.556 [2024-11-20 11:32:50.953712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.556 [2024-11-20 11:32:50.953735] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:15.556 [2024-11-20 11:32:50.953753] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:15.556 [2024-11-20 11:32:50.953771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64800 len:8 PRP1 0x0 PRP2 0x0 00:20:15.556 [2024-11-20 11:32:50.953795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.556 [2024-11-20 11:32:50.953819] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:15.556 [2024-11-20 11:32:50.953838] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:15.556 [2024-11-20 11:32:50.953856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64808 len:8 PRP1 0x0 PRP2 0x0 00:20:15.556 [2024-11-20 11:32:50.953879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.556 [2024-11-20 11:32:50.953903] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:15.556 [2024-11-20 11:32:50.953922] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:15.556 [2024-11-20 
11:32:50.953940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64816 len:8 PRP1 0x0 PRP2 0x0 00:20:15.556 [2024-11-20 11:32:50.953963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.556 [2024-11-20 11:32:50.953987] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:15.556 [2024-11-20 11:32:50.954004] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:15.556 [2024-11-20 11:32:50.954024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64824 len:8 PRP1 0x0 PRP2 0x0 00:20:15.556 [2024-11-20 11:32:50.954052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.556 [2024-11-20 11:32:50.954079] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:15.556 [2024-11-20 11:32:50.954100] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:15.556 [2024-11-20 11:32:50.954121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64832 len:8 PRP1 0x0 PRP2 0x0 00:20:15.556 [2024-11-20 11:32:50.954146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.556 [2024-11-20 11:32:50.954175] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:15.556 [2024-11-20 11:32:50.954195] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:15.556 [2024-11-20 11:32:50.954215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64840 len:8 PRP1 0x0 PRP2 0x0 00:20:15.556 [2024-11-20 11:32:50.954241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.557 [2024-11-20 11:32:50.954283] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:15.557 [2024-11-20 11:32:50.954304] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:15.557 [2024-11-20 11:32:50.954323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64848 len:8 PRP1 0x0 PRP2 0x0 00:20:15.557 [2024-11-20 11:32:50.954349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.557 [2024-11-20 11:32:50.954376] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:15.557 [2024-11-20 11:32:50.954394] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:15.557 [2024-11-20 11:32:50.954414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64856 len:8 PRP1 0x0 PRP2 0x0 00:20:15.557 [2024-11-20 11:32:50.954439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.557 [2024-11-20 11:32:50.954466] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:15.557 [2024-11-20 11:32:50.954485] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:15.557 [2024-11-20 11:32:50.954503] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64864 len:8 PRP1 0x0 PRP2 0x0 00:20:15.557 [2024-11-20 11:32:50.954527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.557 [2024-11-20 11:32:50.954554] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:15.557 [2024-11-20 11:32:50.954573] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:15.557 [2024-11-20 11:32:50.954597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64872 len:8 PRP1 0x0 PRP2 0x0 00:20:15.557 [2024-11-20 11:32:50.954644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.557 [2024-11-20 11:32:50.954672] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:15.557 [2024-11-20 11:32:50.954693] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:15.557 [2024-11-20 11:32:50.954715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64880 len:8 PRP1 0x0 PRP2 0x0 00:20:15.557 [2024-11-20 11:32:50.954740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.557 [2024-11-20 11:32:50.954766] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:15.557 [2024-11-20 11:32:50.954784] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:15.557 [2024-11-20 11:32:50.954804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64888 len:8 PRP1 0x0 PRP2 0x0 00:20:15.557 [2024-11-20 11:32:50.954828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.557 [2024-11-20 11:32:50.954856] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:15.557 [2024-11-20 11:32:50.954876] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:15.557 [2024-11-20 11:32:50.954896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64896 len:8 PRP1 0x0 PRP2 0x0 00:20:15.557 [2024-11-20 11:32:50.954920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.557 [2024-11-20 11:32:50.954947] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:15.557 [2024-11-20 11:32:50.954966] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:15.557 [2024-11-20 11:32:50.954986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64904 len:8 PRP1 0x0 PRP2 0x0 00:20:15.557 [2024-11-20 11:32:50.955026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.557 [2024-11-20 11:32:50.955051] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:15.557 [2024-11-20 11:32:50.955071] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:15.557 [2024-11-20 11:32:50.955090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:0 nsid:1 lba:64912 len:8 PRP1 0x0 PRP2 0x0 00:20:15.557 [2024-11-20 11:32:50.955116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.557 [2024-11-20 11:32:50.955141] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:15.557 [2024-11-20 11:32:50.955159] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:15.557 [2024-11-20 11:32:50.955178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64920 len:8 PRP1 0x0 PRP2 0x0 00:20:15.557 [2024-11-20 11:32:50.955202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.557 [2024-11-20 11:32:50.955228] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:15.557 [2024-11-20 11:32:50.955247] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:15.557 [2024-11-20 11:32:50.955265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64928 len:8 PRP1 0x0 PRP2 0x0 00:20:15.557 [2024-11-20 11:32:50.955290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.557 [2024-11-20 11:32:50.955315] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:15.557 [2024-11-20 11:32:50.955334] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:15.557 [2024-11-20 11:32:50.955356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64936 len:8 PRP1 0x0 PRP2 0x0 00:20:15.557 [2024-11-20 11:32:50.955381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.557 [2024-11-20 11:32:50.955406] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:15.557 [2024-11-20 11:32:50.955425] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:15.557 [2024-11-20 11:32:50.955443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64944 len:8 PRP1 0x0 PRP2 0x0 00:20:15.557 [2024-11-20 11:32:50.955467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.557 [2024-11-20 11:32:50.955492] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:15.557 [2024-11-20 11:32:50.955511] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:15.557 [2024-11-20 11:32:50.955531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64952 len:8 PRP1 0x0 PRP2 0x0 00:20:15.557 [2024-11-20 11:32:50.955555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.557 [2024-11-20 11:32:50.955579] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:15.557 [2024-11-20 11:32:50.955598] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:15.557 [2024-11-20 11:32:50.955634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64960 len:8 PRP1 0x0 PRP2 0x0 
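The (00/08) pair printed in each completion notice above is the NVMe status as (sct/sc): status code type 0x0 (generic) with status code 0x8, ABORTED - SQ DELETION, which is what the outstanding WRITE/READ commands receive when their submission queue is deleted under them. As a minimal sketch (not part of this test run), an SPDK host application could recognize that status in its I/O completion callback roughly like this; the callback name and the retry comment are illustrative assumptions:

#include "spdk/nvme.h"

/* I/O completion callback (matches the spdk_nvme_cmd_cb signature used by
 * spdk_nvme_ns_cmd_read()/write()). Commands cut short by submission-queue
 * deletion complete with the generic status ABORTED - SQ DELETION
 * (sct 0x0 / sc 0x8), i.e. the "(00/08)" seen in the notices above. */
static void
io_complete(void *ctx, const struct spdk_nvme_cpl *cpl)
{
	(void)ctx;

	if (spdk_nvme_cpl_is_error(cpl) &&
	    cpl->status.sct == SPDK_NVME_SCT_GENERIC &&
	    cpl->status.sc == SPDK_NVME_SC_ABORTED_SQ_DELETION) {
		/* The command did not execute on the namespace; a caller
		 * could safely resubmit it on a freshly created qpair. */
	}
}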
00:20:15.557 [2024-11-20 11:32:50.955662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.557 [2024-11-20 11:32:50.955688] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:15.557 [2024-11-20 11:32:50.955708] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:15.557 [2024-11-20 11:32:50.955742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:64128 len:8 PRP1 0x0 PRP2 0x0 00:20:15.557 [2024-11-20 11:32:50.955767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.557 [2024-11-20 11:32:50.955794] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:15.557 [2024-11-20 11:32:50.955813] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:15.557 [2024-11-20 11:32:50.955834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:64136 len:8 PRP1 0x0 PRP2 0x0 00:20:15.557 [2024-11-20 11:32:50.955858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.557 [2024-11-20 11:32:50.955884] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:15.557 [2024-11-20 11:32:50.955905] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:15.557 [2024-11-20 11:32:50.955925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:64144 len:8 PRP1 0x0 PRP2 0x0 00:20:15.557 [2024-11-20 11:32:50.955949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.557 [2024-11-20 11:32:50.955977] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:15.557 [2024-11-20 11:32:50.955998] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:15.557 [2024-11-20 11:32:50.956019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:64152 len:8 PRP1 0x0 PRP2 0x0 00:20:15.557 [2024-11-20 11:32:50.956045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.557 [2024-11-20 11:32:50.956072] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:15.557 [2024-11-20 11:32:50.956091] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:15.557 [2024-11-20 11:32:50.956115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:64160 len:8 PRP1 0x0 PRP2 0x0 00:20:15.557 [2024-11-20 11:32:50.956142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.557 [2024-11-20 11:32:50.956176] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:15.557 [2024-11-20 11:32:50.956198] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:15.557 [2024-11-20 11:32:50.956219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:64168 len:8 PRP1 0x0 PRP2 0x0 00:20:15.557 [2024-11-20 11:32:50.956246] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.557 [2024-11-20 11:32:50.956274] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:15.557 [2024-11-20 11:32:50.956295] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:15.557 [2024-11-20 11:32:50.956318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:64176 len:8 PRP1 0x0 PRP2 0x0 00:20:15.557 [2024-11-20 11:32:50.956343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.557 [2024-11-20 11:32:50.956369] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:15.557 [2024-11-20 11:32:50.956388] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:15.557 [2024-11-20 11:32:50.956408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:64184 len:8 PRP1 0x0 PRP2 0x0 00:20:15.557 [2024-11-20 11:32:50.956432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.557 [2024-11-20 11:32:50.956458] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:15.557 [2024-11-20 11:32:50.956491] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:15.558 [2024-11-20 11:32:50.956512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:64192 len:8 PRP1 0x0 PRP2 0x0 00:20:15.558 [2024-11-20 11:32:50.956537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.558 [2024-11-20 11:32:50.956564] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:15.558 [2024-11-20 11:32:50.956583] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:15.558 [2024-11-20 11:32:50.956602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:64200 len:8 PRP1 0x0 PRP2 0x0 00:20:15.558 [2024-11-20 11:32:50.956647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.558 [2024-11-20 11:32:50.956673] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:15.558 [2024-11-20 11:32:50.956691] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:15.558 [2024-11-20 11:32:50.956709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:64208 len:8 PRP1 0x0 PRP2 0x0 00:20:15.558 [2024-11-20 11:32:50.956733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.558 [2024-11-20 11:32:50.956759] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:15.558 [2024-11-20 11:32:50.956777] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:15.558 [2024-11-20 11:32:50.956795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:64216 len:8 PRP1 0x0 PRP2 0x0 00:20:15.558 [2024-11-20 11:32:50.956819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.558 [2024-11-20 11:32:50.956844] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:15.558 [2024-11-20 11:32:50.956862] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:15.558 [2024-11-20 11:32:50.956883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:64224 len:8 PRP1 0x0 PRP2 0x0 00:20:15.558 [2024-11-20 11:32:50.956911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.558 [2024-11-20 11:32:50.956945] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:15.558 [2024-11-20 11:32:50.956966] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:15.558 [2024-11-20 11:32:50.956987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:64232 len:8 PRP1 0x0 PRP2 0x0 00:20:15.558 [2024-11-20 11:32:50.957014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.558 [2024-11-20 11:32:50.957042] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:15.558 [2024-11-20 11:32:50.957065] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:15.558 [2024-11-20 11:32:50.957087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:64240 len:8 PRP1 0x0 PRP2 0x0 00:20:15.558 [2024-11-20 11:32:50.957114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.558 [2024-11-20 11:32:50.957142] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:15.558 [2024-11-20 11:32:50.957164] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:15.558 [2024-11-20 11:32:50.957186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:64248 len:8 PRP1 0x0 PRP2 0x0 00:20:15.558 [2024-11-20 11:32:50.957214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.558 [2024-11-20 11:32:50.957259] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:15.558 [2024-11-20 11:32:50.957281] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:15.558 [2024-11-20 11:32:50.957303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64256 len:8 PRP1 0x0 PRP2 0x0 00:20:15.558 [2024-11-20 11:32:50.957329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.558 [2024-11-20 11:32:50.957357] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:15.558 [2024-11-20 11:32:50.957378] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:15.558 [2024-11-20 11:32:50.957398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64264 len:8 PRP1 0x0 PRP2 0x0 00:20:15.558 [2024-11-20 11:32:50.957424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
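The records above that show PRP1 0x0 PRP2 0x0 together with "aborting queued i/o" and "Command completed manually" are requests that were still queued in the host driver when the qpair was torn down: nvme_qpair_abort_queued_reqs() fails them in software with the same ABORTED - SQ DELETION status, without them ever reaching the transport. Those manual completions are delivered to the application from the qpair's normal completion-processing path; a minimal, assumed sketch of that polling loop, for an I/O qpair previously obtained from spdk_nvme_ctrlr_alloc_io_qpair():

#include "spdk/nvme.h"

/* Drain all currently available completions on one I/O qpair. Both
 * transport-level completions and the manually completed aborted requests
 * logged above arrive through this call and invoke the per-command
 * callbacks registered at submission time. */
static void
drain_completions(struct spdk_nvme_qpair *qpair)
{
	int32_t rc;

	do {
		/* max_completions == 0 means "no limit": reap everything. */
		rc = spdk_nvme_qpair_process_completions(qpair, 0);
	} while (rc > 0);

	if (rc < 0) {
		/* A negative return means the qpair itself has failed
		 * (e.g. the TCP connection dropped), not a single command. */
	}
}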
00:20:15.558 [2024-11-20 11:32:50.957451] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:15.558 [2024-11-20 11:32:50.957472] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:15.558 [2024-11-20 11:32:50.957493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64272 len:8 PRP1 0x0 PRP2 0x0 00:20:15.558 [2024-11-20 11:32:50.957518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.558 [2024-11-20 11:32:50.957545] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:15.558 [2024-11-20 11:32:50.957566] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:15.558 [2024-11-20 11:32:50.957588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64280 len:8 PRP1 0x0 PRP2 0x0 00:20:15.558 [2024-11-20 11:32:50.957647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.558 [2024-11-20 11:32:50.957676] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:15.558 [2024-11-20 11:32:50.957697] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:15.558 [2024-11-20 11:32:50.957719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64288 len:8 PRP1 0x0 PRP2 0x0 00:20:15.558 [2024-11-20 11:32:50.957745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.558 [2024-11-20 11:32:50.957774] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:15.558 [2024-11-20 11:32:50.957795] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:15.558 [2024-11-20 11:32:50.957816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64296 len:8 PRP1 0x0 PRP2 0x0 00:20:15.558 [2024-11-20 11:32:50.957841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.558 [2024-11-20 11:32:50.957868] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:15.558 [2024-11-20 11:32:50.957888] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:15.558 [2024-11-20 11:32:50.957909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64304 len:8 PRP1 0x0 PRP2 0x0 00:20:15.558 [2024-11-20 11:32:50.957934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.558 [2024-11-20 11:32:50.957960] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:15.558 [2024-11-20 11:32:50.957979] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:15.558 [2024-11-20 11:32:50.958000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64312 len:8 PRP1 0x0 PRP2 0x0 00:20:15.558 [2024-11-20 11:32:50.958047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.558 [2024-11-20 11:32:50.958075] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:15.558 [2024-11-20 11:32:50.958095] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:15.558 [2024-11-20 11:32:50.958114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64320 len:8 PRP1 0x0 PRP2 0x0 00:20:15.558 [2024-11-20 11:32:50.958140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.558 [2024-11-20 11:32:50.958166] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:15.558 [2024-11-20 11:32:50.958186] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:15.558 [2024-11-20 11:32:50.958206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64328 len:8 PRP1 0x0 PRP2 0x0 00:20:15.558 [2024-11-20 11:32:50.958232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.558 [2024-11-20 11:32:50.958259] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:15.558 [2024-11-20 11:32:50.958278] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:15.558 [2024-11-20 11:32:50.958298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64336 len:8 PRP1 0x0 PRP2 0x0 00:20:15.558 [2024-11-20 11:32:50.958326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.558 [2024-11-20 11:32:50.958354] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:15.558 [2024-11-20 11:32:50.958375] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:15.558 [2024-11-20 11:32:50.958397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64344 len:8 PRP1 0x0 PRP2 0x0 00:20:15.558 [2024-11-20 11:32:50.958422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.558 [2024-11-20 11:32:50.958449] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:15.558 [2024-11-20 11:32:50.958468] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:15.558 [2024-11-20 11:32:50.958488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64352 len:8 PRP1 0x0 PRP2 0x0 00:20:15.558 [2024-11-20 11:32:50.958513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.558 [2024-11-20 11:32:50.958538] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:15.558 [2024-11-20 11:32:50.958556] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:15.558 [2024-11-20 11:32:50.958575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64360 len:8 PRP1 0x0 PRP2 0x0 00:20:15.558 [2024-11-20 11:32:50.958601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.558 [2024-11-20 11:32:50.958646] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting 
queued i/o 00:20:15.558 [2024-11-20 11:32:50.958667] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:15.558 [2024-11-20 11:32:50.958687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64368 len:8 PRP1 0x0 PRP2 0x0 00:20:15.558 [2024-11-20 11:32:50.958711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.558 [2024-11-20 11:32:50.958737] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:15.558 [2024-11-20 11:32:50.958755] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:15.558 [2024-11-20 11:32:50.958788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64376 len:8 PRP1 0x0 PRP2 0x0 00:20:15.559 [2024-11-20 11:32:50.958814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.559 [2024-11-20 11:32:50.958839] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:15.559 [2024-11-20 11:32:50.958858] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:15.559 [2024-11-20 11:32:50.958878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64384 len:8 PRP1 0x0 PRP2 0x0 00:20:15.559 [2024-11-20 11:32:50.958903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.559 [2024-11-20 11:32:50.958930] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:15.559 [2024-11-20 11:32:50.958950] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:15.559 [2024-11-20 11:32:50.958972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64392 len:8 PRP1 0x0 PRP2 0x0 00:20:15.559 [2024-11-20 11:32:50.958998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.559 [2024-11-20 11:32:50.959026] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:15.559 [2024-11-20 11:32:50.959046] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:15.559 [2024-11-20 11:32:50.959066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64400 len:8 PRP1 0x0 PRP2 0x0 00:20:15.559 [2024-11-20 11:32:50.959091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.559 [2024-11-20 11:32:50.959118] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:15.559 [2024-11-20 11:32:50.959138] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:15.559 [2024-11-20 11:32:50.959158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64408 len:8 PRP1 0x0 PRP2 0x0 00:20:15.559 [2024-11-20 11:32:50.959184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.559 [2024-11-20 11:32:50.959210] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:15.559 [2024-11-20 11:32:50.959231] 
nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:15.559 [2024-11-20 11:32:50.959252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64416 len:8 PRP1 0x0 PRP2 0x0 00:20:15.559 [2024-11-20 11:32:50.959277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.559 [2024-11-20 11:32:50.959304] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:15.559 [2024-11-20 11:32:50.959323] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:15.559 [2024-11-20 11:32:50.959344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64424 len:8 PRP1 0x0 PRP2 0x0 00:20:15.559 [2024-11-20 11:32:50.959371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.559 [2024-11-20 11:32:50.959397] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:15.559 [2024-11-20 11:32:50.959416] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:15.559 [2024-11-20 11:32:50.959435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64432 len:8 PRP1 0x0 PRP2 0x0 00:20:15.559 [2024-11-20 11:32:50.959459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.559 [2024-11-20 11:32:50.959502] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:15.559 [2024-11-20 11:32:50.959523] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:15.559 [2024-11-20 11:32:50.959543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64440 len:8 PRP1 0x0 PRP2 0x0 00:20:15.559 [2024-11-20 11:32:50.959569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.559 [2024-11-20 11:32:50.959596] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:15.559 [2024-11-20 11:32:50.959636] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:15.559 [2024-11-20 11:32:50.959659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64448 len:8 PRP1 0x0 PRP2 0x0 00:20:15.559 [2024-11-20 11:32:50.959687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.559 [2024-11-20 11:32:50.959717] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:15.559 [2024-11-20 11:32:50.959738] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:15.559 [2024-11-20 11:32:50.959758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64456 len:8 PRP1 0x0 PRP2 0x0 00:20:15.559 [2024-11-20 11:32:50.959783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.559 [2024-11-20 11:32:50.959810] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:15.559 [2024-11-20 11:32:50.959829] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: 
*NOTICE*: Command completed manually: 00:20:15.559 [2024-11-20 11:32:50.959850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64464 len:8 PRP1 0x0 PRP2 0x0 00:20:15.559 [2024-11-20 11:32:50.959882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.559 [2024-11-20 11:32:50.959912] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:15.559 [2024-11-20 11:32:50.959934] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:15.559 [2024-11-20 11:32:50.959956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64472 len:8 PRP1 0x0 PRP2 0x0 00:20:15.559 [2024-11-20 11:32:50.959984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.559 [2024-11-20 11:32:50.960010] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:15.559 [2024-11-20 11:32:50.960030] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:15.559 [2024-11-20 11:32:50.960051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64480 len:8 PRP1 0x0 PRP2 0x0 00:20:15.559 [2024-11-20 11:32:50.960077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.559 [2024-11-20 11:32:50.960104] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:15.559 [2024-11-20 11:32:50.960124] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:15.559 [2024-11-20 11:32:50.960145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64488 len:8 PRP1 0x0 PRP2 0x0 00:20:15.559 [2024-11-20 11:32:50.960172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.559 [2024-11-20 11:32:50.960201] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:15.559 [2024-11-20 11:32:50.960223] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:15.559 [2024-11-20 11:32:50.960245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64496 len:8 PRP1 0x0 PRP2 0x0 00:20:15.559 [2024-11-20 11:32:50.960289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.559 [2024-11-20 11:32:50.960318] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:15.559 [2024-11-20 11:32:50.960339] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:15.559 [2024-11-20 11:32:50.960360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64504 len:8 PRP1 0x0 PRP2 0x0 00:20:15.559 [2024-11-20 11:32:50.960384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.559 [2024-11-20 11:32:50.960410] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:15.559 [2024-11-20 11:32:50.960429] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:15.559 
[2024-11-20 11:32:50.960448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64512 len:8 PRP1 0x0 PRP2 0x0 00:20:15.559 [2024-11-20 11:32:50.960474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.559 [2024-11-20 11:32:50.960503] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:15.559 [2024-11-20 11:32:50.960530] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:15.559 [2024-11-20 11:32:50.960554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64520 len:8 PRP1 0x0 PRP2 0x0 00:20:15.559 [2024-11-20 11:32:50.960582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.559 [2024-11-20 11:32:50.960611] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:15.559 [2024-11-20 11:32:50.960651] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:15.559 [2024-11-20 11:32:50.960673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64528 len:8 PRP1 0x0 PRP2 0x0 00:20:15.559 [2024-11-20 11:32:50.960702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.559 [2024-11-20 11:32:50.960732] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:15.559 [2024-11-20 11:32:50.960752] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:15.559 [2024-11-20 11:32:50.960774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64536 len:8 PRP1 0x0 PRP2 0x0 00:20:15.559 [2024-11-20 11:32:50.960800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.559 [2024-11-20 11:32:50.960826] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:15.559 [2024-11-20 11:32:50.960846] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:15.560 [2024-11-20 11:32:50.960866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64544 len:8 PRP1 0x0 PRP2 0x0 00:20:15.560 [2024-11-20 11:32:50.960892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.560 [2024-11-20 11:32:50.960919] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:15.560 [2024-11-20 11:32:50.960939] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:15.560 [2024-11-20 11:32:50.960958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64552 len:8 PRP1 0x0 PRP2 0x0 00:20:15.560 [2024-11-20 11:32:50.960983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.560 [2024-11-20 11:32:50.961009] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:15.560 [2024-11-20 11:32:50.961029] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:15.560 [2024-11-20 11:32:50.961063] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64560 len:8 PRP1 0x0 PRP2 0x0 00:20:15.560 [2024-11-20 11:32:50.961090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.560 [2024-11-20 11:32:50.961116] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:15.560 [2024-11-20 11:32:50.961136] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:15.560 [2024-11-20 11:32:50.961156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64568 len:8 PRP1 0x0 PRP2 0x0 00:20:15.560 [2024-11-20 11:32:50.961182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.560 [2024-11-20 11:32:50.961208] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:15.560 [2024-11-20 11:32:50.961227] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:15.560 [2024-11-20 11:32:50.961247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64576 len:8 PRP1 0x0 PRP2 0x0 00:20:15.560 [2024-11-20 11:32:50.961271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.560 [2024-11-20 11:32:50.961298] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:15.560 [2024-11-20 11:32:50.961318] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:15.560 [2024-11-20 11:32:50.961339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64584 len:8 PRP1 0x0 PRP2 0x0 00:20:15.560 [2024-11-20 11:32:50.961363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.560 [2024-11-20 11:32:50.961389] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:15.560 [2024-11-20 11:32:50.961408] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:15.560 [2024-11-20 11:32:50.961428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64592 len:8 PRP1 0x0 PRP2 0x0 00:20:15.560 [2024-11-20 11:32:50.961457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.560 [2024-11-20 11:32:50.961484] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:15.560 [2024-11-20 11:32:50.961505] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:15.560 [2024-11-20 11:32:50.961526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64600 len:8 PRP1 0x0 PRP2 0x0 00:20:15.560 [2024-11-20 11:32:50.961551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.560 [2024-11-20 11:32:50.961578] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:15.560 [2024-11-20 11:32:50.961627] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:15.560 [2024-11-20 11:32:50.961651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:0 nsid:1 lba:64608 len:8 PRP1 0x0 PRP2 0x0 00:20:15.560 [2024-11-20 11:32:50.961678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.560 [2024-11-20 11:32:50.961706] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:15.560 [2024-11-20 11:32:50.961727] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:15.560 [2024-11-20 11:32:50.961747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64616 len:8 PRP1 0x0 PRP2 0x0 00:20:15.560 [2024-11-20 11:32:50.961774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.560 [2024-11-20 11:32:50.961801] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:15.560 [2024-11-20 11:32:50.961834] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:15.560 [2024-11-20 11:32:50.961856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64624 len:8 PRP1 0x0 PRP2 0x0 00:20:15.560 [2024-11-20 11:32:50.961881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.560 [2024-11-20 11:32:50.961906] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:15.560 [2024-11-20 11:32:50.961926] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:15.560 [2024-11-20 11:32:50.961944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64632 len:8 PRP1 0x0 PRP2 0x0 00:20:15.560 [2024-11-20 11:32:50.961966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.560 [2024-11-20 11:32:50.961990] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:15.560 [2024-11-20 11:32:50.962007] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:15.560 [2024-11-20 11:32:50.962025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64640 len:8 PRP1 0x0 PRP2 0x0 00:20:15.560 [2024-11-20 11:32:50.962047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.560 [2024-11-20 11:32:50.962070] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:15.560 [2024-11-20 11:32:50.962087] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:15.560 [2024-11-20 11:32:50.962105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64648 len:8 PRP1 0x0 PRP2 0x0 00:20:15.560 [2024-11-20 11:32:50.962126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.560 [2024-11-20 11:32:50.962150] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:15.560 [2024-11-20 11:32:50.962168] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:15.560 [2024-11-20 11:32:50.962187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64656 len:8 PRP1 0x0 PRP2 0x0 
00:20:15.560 [2024-11-20 11:32:50.962213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.560 [2024-11-20 11:32:50.962238] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:15.560 [2024-11-20 11:32:50.962256] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:15.560 [2024-11-20 11:32:50.962274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64664 len:8 PRP1 0x0 PRP2 0x0 00:20:15.560 [2024-11-20 11:32:50.962296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.560 [2024-11-20 11:32:50.962319] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:15.560 [2024-11-20 11:32:50.962336] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:15.560 [2024-11-20 11:32:50.962354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:63944 len:8 PRP1 0x0 PRP2 0x0 00:20:15.560 [2024-11-20 11:32:50.962377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.560 [2024-11-20 11:32:50.962401] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:15.560 [2024-11-20 11:32:50.962419] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:15.560 [2024-11-20 11:32:50.962436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:63952 len:8 PRP1 0x0 PRP2 0x0 00:20:15.560 [2024-11-20 11:32:50.962459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.560 [2024-11-20 11:32:50.962497] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:15.560 [2024-11-20 11:32:50.962518] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:15.560 [2024-11-20 11:32:50.962537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:63960 len:8 PRP1 0x0 PRP2 0x0 00:20:15.560 [2024-11-20 11:32:50.962561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.560 [2024-11-20 11:32:50.962586] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:15.560 [2024-11-20 11:32:50.962605] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:15.560 [2024-11-20 11:32:50.962642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:63968 len:8 PRP1 0x0 PRP2 0x0 00:20:15.560 [2024-11-20 11:32:50.962667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.560 [2024-11-20 11:32:50.962692] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:15.560 [2024-11-20 11:32:50.962711] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:15.560 [2024-11-20 11:32:50.976190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:63976 len:8 PRP1 0x0 PRP2 0x0 00:20:15.560 [2024-11-20 11:32:50.976237] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.560 [2024-11-20 11:32:50.976275] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:15.560 [2024-11-20 11:32:50.976298] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:15.560 [2024-11-20 11:32:50.976313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:63984 len:8 PRP1 0x0 PRP2 0x0 00:20:15.560 [2024-11-20 11:32:50.976338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.560 [2024-11-20 11:32:50.976363] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:15.560 [2024-11-20 11:32:50.976376] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:15.560 [2024-11-20 11:32:50.976387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:63992 len:8 PRP1 0x0 PRP2 0x0 00:20:15.560 [2024-11-20 11:32:50.976403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.560 [2024-11-20 11:32:50.976418] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:15.560 [2024-11-20 11:32:50.976429] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:15.560 [2024-11-20 11:32:50.976445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:64000 len:8 PRP1 0x0 PRP2 0x0 00:20:15.560 [2024-11-20 11:32:50.976463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.561 [2024-11-20 11:32:50.976488] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:15.561 [2024-11-20 11:32:50.976509] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:15.561 [2024-11-20 11:32:50.976532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:64008 len:8 PRP1 0x0 PRP2 0x0 00:20:15.561 [2024-11-20 11:32:50.976547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.561 [2024-11-20 11:32:50.976568] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:15.561 [2024-11-20 11:32:50.976588] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:15.561 [2024-11-20 11:32:50.976601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:64016 len:8 PRP1 0x0 PRP2 0x0 00:20:15.561 [2024-11-20 11:32:50.976665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.561 [2024-11-20 11:32:50.976685] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:15.561 [2024-11-20 11:32:50.976697] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:15.561 [2024-11-20 11:32:50.976708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:64024 len:8 PRP1 0x0 PRP2 0x0 00:20:15.561 [2024-11-20 11:32:50.976722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.561 [2024-11-20 11:32:50.976737] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:15.561 [2024-11-20 11:32:50.976748] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:15.561 [2024-11-20 11:32:50.976760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:64032 len:8 PRP1 0x0 PRP2 0x0 00:20:15.561 [2024-11-20 11:32:50.976773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.561 [2024-11-20 11:32:50.976788] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:15.561 [2024-11-20 11:32:50.976799] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:15.561 [2024-11-20 11:32:50.976811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:64040 len:8 PRP1 0x0 PRP2 0x0 00:20:15.561 [2024-11-20 11:32:50.976825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.561 [2024-11-20 11:32:50.976840] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:15.561 [2024-11-20 11:32:50.976850] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:15.561 [2024-11-20 11:32:50.976862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:64048 len:8 PRP1 0x0 PRP2 0x0 00:20:15.561 [2024-11-20 11:32:50.976876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.561 [2024-11-20 11:32:50.976891] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:15.561 [2024-11-20 11:32:50.976902] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:15.561 [2024-11-20 11:32:50.976913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:64056 len:8 PRP1 0x0 PRP2 0x0 00:20:15.561 [2024-11-20 11:32:50.976927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.561 [2024-11-20 11:32:50.976942] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:15.561 [2024-11-20 11:32:50.976953] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:15.561 [2024-11-20 11:32:50.976964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:64064 len:8 PRP1 0x0 PRP2 0x0 00:20:15.561 [2024-11-20 11:32:50.976978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.561 [2024-11-20 11:32:50.976993] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:15.561 [2024-11-20 11:32:50.977004] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:15.561 [2024-11-20 11:32:50.977015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:64072 len:8 PRP1 0x0 PRP2 0x0 00:20:15.561 [2024-11-20 11:32:50.977029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:20:15.561 [2024-11-20 11:32:50.977044] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:15.561 [2024-11-20 11:32:50.977054] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:15.561 [2024-11-20 11:32:50.977074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:64080 len:8 PRP1 0x0 PRP2 0x0 00:20:15.561 [2024-11-20 11:32:50.977089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.561 [2024-11-20 11:32:50.977104] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:15.561 [2024-11-20 11:32:50.977115] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:15.561 [2024-11-20 11:32:50.977126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:64088 len:8 PRP1 0x0 PRP2 0x0 00:20:15.561 [2024-11-20 11:32:50.977141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.561 [2024-11-20 11:32:50.977155] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:15.561 [2024-11-20 11:32:50.977166] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:15.561 [2024-11-20 11:32:50.977178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:64096 len:8 PRP1 0x0 PRP2 0x0 00:20:15.561 [2024-11-20 11:32:50.977197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.561 [2024-11-20 11:32:50.977219] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:15.561 [2024-11-20 11:32:50.977234] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:15.561 [2024-11-20 11:32:50.977249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:64104 len:8 PRP1 0x0 PRP2 0x0 00:20:15.561 [2024-11-20 11:32:50.977269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.561 [2024-11-20 11:32:50.977289] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:15.561 [2024-11-20 11:32:50.977303] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:15.561 [2024-11-20 11:32:50.977319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:64112 len:8 PRP1 0x0 PRP2 0x0 00:20:15.561 [2024-11-20 11:32:50.977339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.561 [2024-11-20 11:32:50.977360] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:15.561 [2024-11-20 11:32:50.977375] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:15.561 [2024-11-20 11:32:50.977396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:64120 len:8 PRP1 0x0 PRP2 0x0 00:20:15.561 [2024-11-20 11:32:50.977419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.561 [2024-11-20 11:32:50.977440] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:15.561 [2024-11-20 11:32:50.977455] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:15.561 [2024-11-20 11:32:50.977470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64672 len:8 PRP1 0x0 PRP2 0x0 00:20:15.561 [2024-11-20 11:32:50.977490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.561 [2024-11-20 11:32:50.977510] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:15.561 [2024-11-20 11:32:50.977525] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:15.561 [2024-11-20 11:32:50.977540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64680 len:8 PRP1 0x0 PRP2 0x0 00:20:15.561 [2024-11-20 11:32:50.977560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.561 [2024-11-20 11:32:50.977608] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:15.561 [2024-11-20 11:32:50.977647] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:15.561 [2024-11-20 11:32:50.977664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64688 len:8 PRP1 0x0 PRP2 0x0 00:20:15.561 [2024-11-20 11:32:50.977691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.561 [2024-11-20 11:32:50.977712] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:15.561 [2024-11-20 11:32:50.977727] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:15.561 [2024-11-20 11:32:50.977743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64696 len:8 PRP1 0x0 PRP2 0x0 00:20:15.561 [2024-11-20 11:32:50.977762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.561 [2024-11-20 11:32:50.977783] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:15.561 [2024-11-20 11:32:50.977798] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:15.561 [2024-11-20 11:32:50.977814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64704 len:8 PRP1 0x0 PRP2 0x0 00:20:15.561 [2024-11-20 11:32:50.977833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.561 [2024-11-20 11:32:50.977854] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:15.561 [2024-11-20 11:32:50.977868] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:15.561 [2024-11-20 11:32:50.977884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64712 len:8 PRP1 0x0 PRP2 0x0 00:20:15.561 [2024-11-20 11:32:50.977903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.561 [2024-11-20 11:32:50.977924] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting 
queued i/o 00:20:15.561 [2024-11-20 11:32:50.977938] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:15.561 [2024-11-20 11:32:50.977953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64720 len:8 PRP1 0x0 PRP2 0x0 00:20:15.561 [2024-11-20 11:32:50.977973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.561 [2024-11-20 11:32:50.977993] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:15.561 [2024-11-20 11:32:50.978009] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:15.561 [2024-11-20 11:32:50.978029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64728 len:8 PRP1 0x0 PRP2 0x0 00:20:15.561 [2024-11-20 11:32:50.978050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.561 [2024-11-20 11:32:50.978071] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:15.561 [2024-11-20 11:32:50.978086] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:15.562 [2024-11-20 11:32:50.978102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64736 len:8 PRP1 0x0 PRP2 0x0 00:20:15.562 [2024-11-20 11:32:50.978123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.562 [2024-11-20 11:32:50.978149] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:15.562 [2024-11-20 11:32:50.978162] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:15.562 [2024-11-20 11:32:50.978174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64744 len:8 PRP1 0x0 PRP2 0x0 00:20:15.562 [2024-11-20 11:32:50.978197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.562 [2024-11-20 11:32:50.978214] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:15.562 [2024-11-20 11:32:50.978225] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:15.562 [2024-11-20 11:32:50.978237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64752 len:8 PRP1 0x0 PRP2 0x0 00:20:15.562 [2024-11-20 11:32:50.978251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.562 [2024-11-20 11:32:50.978265] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:15.562 [2024-11-20 11:32:50.978276] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:15.562 [2024-11-20 11:32:50.978288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64760 len:8 PRP1 0x0 PRP2 0x0 00:20:15.562 [2024-11-20 11:32:50.978302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.562 [2024-11-20 11:32:50.978316] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:15.562 [2024-11-20 11:32:50.978327] 
nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:15.562 [2024-11-20 11:32:50.978339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64768 len:8 PRP1 0x0 PRP2 0x0 00:20:15.562 [2024-11-20 11:32:50.978353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.562 [2024-11-20 11:32:50.978368] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:15.562 [2024-11-20 11:32:50.978379] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:15.562 [2024-11-20 11:32:50.978391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64776 len:8 PRP1 0x0 PRP2 0x0 00:20:15.562 [2024-11-20 11:32:50.978405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.562 [2024-11-20 11:32:50.978477] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 10.0.0.3:4421 to 10.0.0.3:4422 00:20:15.562 [2024-11-20 11:32:50.978567] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:15.562 [2024-11-20 11:32:50.978591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.562 [2024-11-20 11:32:50.978610] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:15.562 [2024-11-20 11:32:50.978642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.562 [2024-11-20 11:32:50.978674] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:15.562 [2024-11-20 11:32:50.978702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.562 [2024-11-20 11:32:50.978718] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:15.562 [2024-11-20 11:32:50.978732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.562 [2024-11-20 11:32:50.978748] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:20:15.562 [2024-11-20 11:32:50.978799] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb42f30 (9): Bad file descriptor 00:20:15.562 [2024-11-20 11:32:50.982923] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:20:15.562 [2024-11-20 11:32:51.006976] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful. 
00:20:15.562 7687.17 IOPS, 30.03 MiB/s [2024-11-20T11:33:01.151Z] 7813.00 IOPS, 30.52 MiB/s [2024-11-20T11:33:01.151Z] 7912.88 IOPS, 30.91 MiB/s [2024-11-20T11:33:01.151Z] 7967.89 IOPS, 31.12 MiB/s [2024-11-20T11:33:01.151Z] [2024-11-20 11:32:55.537010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:114120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:15.562 [2024-11-20 11:32:55.537068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.562 [2024-11-20 11:32:55.537097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:114128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:15.562 [2024-11-20 11:32:55.537115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.562 [2024-11-20 11:32:55.537132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:114136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:15.562 [2024-11-20 11:32:55.537148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.562 [2024-11-20 11:32:55.537165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:114144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:15.562 [2024-11-20 11:32:55.537180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.562 [2024-11-20 11:32:55.537196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:114152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:15.562 [2024-11-20 11:32:55.537211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.562 [2024-11-20 11:32:55.537227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:114160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:15.562 [2024-11-20 11:32:55.537242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.562 [2024-11-20 11:32:55.537259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:114168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:15.562 [2024-11-20 11:32:55.537274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.562 [2024-11-20 11:32:55.537290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:114176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:15.562 [2024-11-20 11:32:55.537305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.562 [2024-11-20 11:32:55.537322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:114184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:15.562 [2024-11-20 11:32:55.537337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.562 [2024-11-20 11:32:55.537353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:114192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:20:15.562 [2024-11-20 11:32:55.537367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.562 [2024-11-20 11:32:55.537384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:114200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:15.562 [2024-11-20 11:32:55.537398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.562 [2024-11-20 11:32:55.537415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:114208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:15.562 [2024-11-20 11:32:55.537430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.562 [2024-11-20 11:32:55.537476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:114216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:15.562 [2024-11-20 11:32:55.537493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.562 [2024-11-20 11:32:55.537509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:114224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:15.562 [2024-11-20 11:32:55.537524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.562 [2024-11-20 11:32:55.537541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:114232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:15.562 [2024-11-20 11:32:55.537556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.562 [2024-11-20 11:32:55.537572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:114240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:15.562 [2024-11-20 11:32:55.537587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.562 [2024-11-20 11:32:55.537646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:114248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:15.562 [2024-11-20 11:32:55.537665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.562 [2024-11-20 11:32:55.537683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:114256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:15.562 [2024-11-20 11:32:55.537697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.562 [2024-11-20 11:32:55.537714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:114264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:15.562 [2024-11-20 11:32:55.537729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.562 [2024-11-20 11:32:55.537746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:114272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:15.562 [2024-11-20 11:32:55.537760] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.562 [2024-11-20 11:32:55.537777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:15.562 [2024-11-20 11:32:55.537791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.562 [2024-11-20 11:32:55.537808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:114288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:15.562 [2024-11-20 11:32:55.537823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.562 [2024-11-20 11:32:55.537839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:114296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:15.562 [2024-11-20 11:32:55.537854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.562 [2024-11-20 11:32:55.537871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:114304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:15.563 [2024-11-20 11:32:55.537887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.563 [2024-11-20 11:32:55.537903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:114312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:15.563 [2024-11-20 11:32:55.537941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.563 [2024-11-20 11:32:55.537961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:113352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:15.563 [2024-11-20 11:32:55.537977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.563 [2024-11-20 11:32:55.537993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:113360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:15.563 [2024-11-20 11:32:55.538008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.563 [2024-11-20 11:32:55.538025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:113368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:15.563 [2024-11-20 11:32:55.538040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.563 [2024-11-20 11:32:55.538056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:113376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:15.563 [2024-11-20 11:32:55.538071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.563 [2024-11-20 11:32:55.538087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:113384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:15.563 [2024-11-20 11:32:55.538102] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... repetitive output trimmed: the same READ command / ABORTED - SQ DELETION (00/08) completion pair repeats for qid:1 nsid:1 len:8 across lba 113392 through 114104 while the submission queue is deleted for failover ...]
00:20:15.565 [2024-11-20 11:32:55.541103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:114112 len:8 SGL
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:15.565 [2024-11-20 11:32:55.541118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.565 [2024-11-20 11:32:55.541134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:114320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:15.565 [2024-11-20 11:32:55.541149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.565 [2024-11-20 11:32:55.541165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:114328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:15.565 [2024-11-20 11:32:55.541180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.565 [2024-11-20 11:32:55.541197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:114336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:15.565 [2024-11-20 11:32:55.541212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.565 [2024-11-20 11:32:55.541228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:114344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:15.565 [2024-11-20 11:32:55.541243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.565 [2024-11-20 11:32:55.541265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:114352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:15.565 [2024-11-20 11:32:55.541281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.565 [2024-11-20 11:32:55.541298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:114360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:15.565 [2024-11-20 11:32:55.541313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.565 [2024-11-20 11:32:55.541346] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:15.565 [2024-11-20 11:32:55.541362] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:15.565 [2024-11-20 11:32:55.541374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114368 len:8 PRP1 0x0 PRP2 0x0 00:20:15.565 [2024-11-20 11:32:55.541389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.565 [2024-11-20 11:32:55.541444] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 10.0.0.3:4422 to 10.0.0.3:4420 00:20:15.565 [2024-11-20 11:32:55.541505] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:15.565 [2024-11-20 11:32:55.541528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.565 [2024-11-20 11:32:55.541545] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 
cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:15.565 [2024-11-20 11:32:55.541561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.565 [2024-11-20 11:32:55.541576] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:15.565 [2024-11-20 11:32:55.541591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.565 [2024-11-20 11:32:55.541652] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:15.565 [2024-11-20 11:32:55.541671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.565 [2024-11-20 11:32:55.541686] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:20:15.565 [2024-11-20 11:32:55.541741] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb42f30 (9): Bad file descriptor 00:20:15.565 [2024-11-20 11:32:55.545729] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:20:15.565 [2024-11-20 11:32:55.575501] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful. 00:20:15.565 7913.90 IOPS, 30.91 MiB/s [2024-11-20T11:33:01.154Z] 7994.18 IOPS, 31.23 MiB/s [2024-11-20T11:33:01.154Z] 8014.08 IOPS, 31.31 MiB/s [2024-11-20T11:33:01.154Z] 7998.92 IOPS, 31.25 MiB/s [2024-11-20T11:33:01.154Z] 8028.64 IOPS, 31.36 MiB/s 00:20:15.565 Latency(us) 00:20:15.565 [2024-11-20T11:33:01.154Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:15.565 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:20:15.565 Verification LBA range: start 0x0 length 0x4000 00:20:15.565 NVMe0n1 : 15.01 8023.67 31.34 219.41 0.00 15493.03 621.85 37176.79 00:20:15.565 [2024-11-20T11:33:01.154Z] =================================================================================================================== 00:20:15.565 [2024-11-20T11:33:01.155Z] Total : 8023.67 31.34 219.41 0.00 15493.03 621.85 37176.79 00:20:15.566 Received shutdown signal, test time was about 15.000000 seconds 00:20:15.566 00:20:15.566 Latency(us) 00:20:15.566 [2024-11-20T11:33:01.155Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:15.566 [2024-11-20T11:33:01.155Z] =================================================================================================================== 00:20:15.566 [2024-11-20T11:33:01.155Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:15.566 11:33:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:20:15.566 11:33:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3 00:20:15.566 11:33:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:20:15.566 11:33:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=88468 00:20:15.566 11:33:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 88468 /var/tmp/bdevperf.sock 00:20:15.566 11:33:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r 
/var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:20:15.566 11:33:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 88468 ']' 00:20:15.566 11:33:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:15.566 11:33:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:15.566 11:33:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:15.566 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:15.566 11:33:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:15.566 11:33:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:20:15.825 11:33:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:15.825 11:33:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:20:15.825 11:33:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:20:16.083 [2024-11-20 11:33:01.603173] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:20:16.083 11:33:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422 00:20:16.671 [2024-11-20 11:33:01.931547] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4422 *** 00:20:16.672 11:33:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:20:16.938 NVMe0n1 00:20:16.938 11:33:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:20:17.197 00:20:17.197 11:33:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:20:17.765 00:20:17.765 11:33:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:20:17.765 11:33:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:20:18.024 11:33:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:20:18.283 11:33:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:20:21.571 11:33:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:20:21.571 11:33:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 
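For reference, the failover wiring that host/failover.sh drives above reduces to a handful of rpc.py calls. A minimal sketch, assuming an SPDK target already serving nqn.2016-06.io.spdk:cnode1 on 10.0.0.3 and a bdevperf instance listening on /var/tmp/bdevperf.sock (addresses, ports and names are taken from the log; error handling is omitted):

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
BPERF_SOCK=/var/tmp/bdevperf.sock
NQN=nqn.2016-06.io.spdk:cnode1

# Expose the subsystem on the two extra ports used as failover targets.
"$RPC" nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.3 -s 4421
"$RPC" nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.3 -s 4422

# Attach the same subsystem through all three ports under one bdev name;
# -x failover keeps the extra connections as standby paths rather than
# active multipath members.
for port in 4420 4421 4422; do
    "$RPC" -s "$BPERF_SOCK" bdev_nvme_attach_controller -b NVMe0 -t tcp \
        -a 10.0.0.3 -s "$port" -f ipv4 -n "$NQN" -x failover
done

# Drop the active path; bdev_nvme is expected to fail over to 10.0.0.3:4421.
"$RPC" -s "$BPERF_SOCK" bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 \
    -s 4420 -f ipv4 -n "$NQN"

Detaching the active path is what produces the "Start failover from ..." and "Resetting controller successful" notices replayed in try.txt further down.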
00:20:21.571 11:33:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:21.571 11:33:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=88597 00:20:21.571 11:33:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 88597 00:20:22.948 { 00:20:22.948 "results": [ 00:20:22.948 { 00:20:22.948 "job": "NVMe0n1", 00:20:22.948 "core_mask": "0x1", 00:20:22.948 "workload": "verify", 00:20:22.948 "status": "finished", 00:20:22.948 "verify_range": { 00:20:22.948 "start": 0, 00:20:22.948 "length": 16384 00:20:22.948 }, 00:20:22.948 "queue_depth": 128, 00:20:22.948 "io_size": 4096, 00:20:22.948 "runtime": 1.006866, 00:20:22.948 "iops": 8775.745729819062, 00:20:22.948 "mibps": 34.28025675710571, 00:20:22.948 "io_failed": 0, 00:20:22.948 "io_timeout": 0, 00:20:22.948 "avg_latency_us": 14506.640609078564, 00:20:22.948 "min_latency_us": 778.24, 00:20:22.948 "max_latency_us": 14834.967272727272 00:20:22.948 } 00:20:22.948 ], 00:20:22.948 "core_count": 1 00:20:22.948 } 00:20:22.948 11:33:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:20:22.948 [2024-11-20 11:33:01.015505] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 00:20:22.948 [2024-11-20 11:33:01.015610] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88468 ] 00:20:22.948 [2024-11-20 11:33:01.157898] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:22.948 [2024-11-20 11:33:01.193355] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:22.948 [2024-11-20 11:33:03.831515] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 10.0.0.3:4420 to 10.0.0.3:4421 00:20:22.948 [2024-11-20 11:33:03.831687] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:22.948 [2024-11-20 11:33:03.831723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.948 [2024-11-20 11:33:03.831746] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:22.948 [2024-11-20 11:33:03.831764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.948 [2024-11-20 11:33:03.831782] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:22.948 [2024-11-20 11:33:03.831799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.948 [2024-11-20 11:33:03.831817] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:22.948 [2024-11-20 11:33:03.831834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.948 [2024-11-20 11:33:03.831852] nvme_ctrlr.c:1110:nvme_ctrlr_fail: 
*ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state. 00:20:22.948 [2024-11-20 11:33:03.831918] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller 00:20:22.948 [2024-11-20 11:33:03.831961] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1749f30 (9): Bad file descriptor 00:20:22.948 [2024-11-20 11:33:03.839730] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful. 00:20:22.948 Running I/O for 1 seconds... 00:20:22.948 8708.00 IOPS, 34.02 MiB/s 00:20:22.948 Latency(us) 00:20:22.948 [2024-11-20T11:33:08.537Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:22.948 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:20:22.948 Verification LBA range: start 0x0 length 0x4000 00:20:22.948 NVMe0n1 : 1.01 8775.75 34.28 0.00 0.00 14506.64 778.24 14834.97 00:20:22.948 [2024-11-20T11:33:08.537Z] =================================================================================================================== 00:20:22.948 [2024-11-20T11:33:08.537Z] Total : 8775.75 34.28 0.00 0.00 14506.64 778.24 14834.97 00:20:22.948 11:33:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:20:22.948 11:33:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:20:23.208 11:33:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:20:23.466 11:33:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:20:23.466 11:33:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:20:23.731 11:33:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:20:23.990 11:33:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:20:27.276 11:33:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:20:27.276 11:33:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:20:27.276 11:33:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 88468 00:20:27.276 11:33:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 88468 ']' 00:20:27.276 11:33:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 88468 00:20:27.276 11:33:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:20:27.276 11:33:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:27.276 11:33:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88468 00:20:27.276 killing process with pid 88468 00:20:27.276 11:33:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:27.276 11:33:12 nvmf_tcp.nvmf_host.nvmf_failover -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:27.276 11:33:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88468' 00:20:27.276 11:33:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 88468 00:20:27.276 11:33:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 88468 00:20:27.554 11:33:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:20:27.554 11:33:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:27.812 11:33:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:20:27.812 11:33:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:20:27.812 11:33:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:20:27.812 11:33:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:27.812 11:33:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:20:27.812 11:33:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:27.812 11:33:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:20:27.812 11:33:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:27.812 11:33:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:27.812 rmmod nvme_tcp 00:20:27.812 rmmod nvme_fabrics 00:20:27.812 rmmod nvme_keyring 00:20:27.812 11:33:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:27.812 11:33:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:20:27.812 11:33:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:20:27.813 11:33:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@517 -- # '[' -n 88131 ']' 00:20:27.813 11:33:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # killprocess 88131 00:20:27.813 11:33:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 88131 ']' 00:20:27.813 11:33:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 88131 00:20:27.813 11:33:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:20:27.813 11:33:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:27.813 11:33:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88131 00:20:27.813 killing process with pid 88131 00:20:27.813 11:33:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:27.813 11:33:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:27.813 11:33:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88131' 00:20:27.813 11:33:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 88131 00:20:27.813 11:33:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 88131 00:20:28.071 11:33:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:28.071 11:33:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 
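The teardown that follows mirrors the setup. A minimal sketch of the same steps, using the pids reported above (88468 for bdevperf, 88131 for the nvmf target) and omitting the harness's retry and logging helpers:

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# Stop bdevperf, then remove the subsystem from the target.
kill 88468
while kill -0 88468 2>/dev/null; do sleep 0.1; done
"$RPC" nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1

# Drop the kernel initiator modules; modprobe -r also unloads the now-unused
# nvme_fabrics/nvme_keyring dependencies, as the rmmod lines above show.
modprobe -v -r nvme-tcp
modprobe -v -r nvme-fabrics

# Finally stop the nvmf target itself.
kill 88131
while kill -0 88131 2>/dev/null; do sleep 0.1; done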
00:20:28.071 11:33:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:28.071 11:33:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:20:28.071 11:33:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-save 00:20:28.071 11:33:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:28.071 11:33:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-restore 00:20:28.071 11:33:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:28.071 11:33:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:20:28.071 11:33:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:20:28.071 11:33:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:20:28.071 11:33:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:20:28.071 11:33:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:20:28.071 11:33:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:20:28.071 11:33:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:20:28.071 11:33:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:20:28.071 11:33:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:20:28.071 11:33:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:20:28.071 11:33:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:20:28.329 11:33:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:20:28.329 11:33:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:28.329 11:33:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:28.329 11:33:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@246 -- # remove_spdk_ns 00:20:28.329 11:33:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:28.329 11:33:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:28.329 11:33:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:28.329 11:33:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@300 -- # return 0 00:20:28.329 00:20:28.329 real 0m32.241s 00:20:28.329 user 2m5.725s 00:20:28.329 sys 0m4.613s 00:20:28.329 11:33:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:28.329 11:33:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:20:28.329 ************************************ 00:20:28.329 END TEST nvmf_failover 00:20:28.329 ************************************ 00:20:28.329 11:33:13 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:20:28.329 11:33:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:28.329 11:33:13 nvmf_tcp.nvmf_host -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:20:28.329 11:33:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:20:28.329 ************************************ 00:20:28.329 START TEST nvmf_host_discovery 00:20:28.329 ************************************ 00:20:28.329 11:33:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:20:28.329 * Looking for test storage... 00:20:28.329 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:28.329 11:33:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:20:28.329 11:33:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # lcov --version 00:20:28.329 11:33:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:20:28.590 11:33:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:20:28.590 11:33:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:28.590 11:33:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:28.590 11:33:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:28.590 11:33:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:20:28.590 11:33:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:20:28.590 11:33:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:20:28.590 11:33:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:20:28.590 11:33:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:20:28.590 11:33:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:20:28.590 11:33:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:20:28.590 11:33:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:28.590 11:33:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:20:28.590 11:33:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:20:28.590 11:33:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:28.590 11:33:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:28.590 11:33:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:20:28.590 11:33:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:20:28.590 11:33:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:28.590 11:33:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:20:28.590 11:33:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:20:28.590 11:33:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:20:28.590 11:33:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:20:28.590 11:33:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:28.590 11:33:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:20:28.590 11:33:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:20:28.590 11:33:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:28.590 11:33:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:28.590 11:33:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:20:28.590 11:33:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:28.590 11:33:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:20:28.590 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:28.590 --rc genhtml_branch_coverage=1 00:20:28.590 --rc genhtml_function_coverage=1 00:20:28.590 --rc genhtml_legend=1 00:20:28.590 --rc geninfo_all_blocks=1 00:20:28.590 --rc geninfo_unexecuted_blocks=1 00:20:28.590 00:20:28.590 ' 00:20:28.590 11:33:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:20:28.590 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:28.590 --rc genhtml_branch_coverage=1 00:20:28.590 --rc genhtml_function_coverage=1 00:20:28.590 --rc genhtml_legend=1 00:20:28.590 --rc geninfo_all_blocks=1 00:20:28.590 --rc geninfo_unexecuted_blocks=1 00:20:28.590 00:20:28.590 ' 00:20:28.590 11:33:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:20:28.590 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:28.590 --rc genhtml_branch_coverage=1 00:20:28.590 --rc genhtml_function_coverage=1 00:20:28.590 --rc genhtml_legend=1 00:20:28.590 --rc geninfo_all_blocks=1 00:20:28.590 --rc geninfo_unexecuted_blocks=1 00:20:28.590 00:20:28.590 ' 00:20:28.590 11:33:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:20:28.590 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:28.590 --rc genhtml_branch_coverage=1 00:20:28.590 --rc genhtml_function_coverage=1 00:20:28.590 --rc genhtml_legend=1 00:20:28.590 --rc geninfo_all_blocks=1 00:20:28.590 --rc geninfo_unexecuted_blocks=1 00:20:28.590 00:20:28.590 ' 00:20:28.590 11:33:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:28.590 11:33:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:20:28.590 11:33:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:28.590 11:33:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:28.590 11:33:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:28.590 11:33:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:28.590 11:33:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:28.590 11:33:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:28.590 11:33:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:28.590 11:33:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:28.590 11:33:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:28.590 11:33:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:28.590 11:33:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a 00:20:28.590 11:33:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=806d442c-08ba-4719-b76d-33169283459a 00:20:28.590 11:33:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:28.590 11:33:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:28.590 11:33:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:28.590 11:33:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:28.590 11:33:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:28.590 11:33:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:20:28.590 11:33:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:28.590 11:33:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:28.590 11:33:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:28.590 11:33:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:28.590 11:33:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:28.590 11:33:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:28.590 11:33:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:20:28.590 11:33:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:28.590 11:33:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:20:28.590 11:33:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:28.590 11:33:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:28.590 11:33:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:28.590 11:33:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:28.590 11:33:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:28.590 11:33:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:28.590 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:28.591 11:33:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:28.591 11:33:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:28.591 11:33:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:28.591 11:33:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:20:28.591 11:33:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:20:28.591 11:33:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@17 -- 
# DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:20:28.591 11:33:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:20:28.591 11:33:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:20:28.591 11:33:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:20:28.591 11:33:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:20:28.591 11:33:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:28.591 11:33:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:28.591 11:33:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:28.591 11:33:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:28.591 11:33:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:28.591 11:33:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:28.591 11:33:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:28.591 11:33:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:28.591 11:33:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:20:28.591 11:33:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:20:28.591 11:33:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:20:28.591 11:33:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:20:28.591 11:33:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:20:28.591 11:33:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@460 -- # nvmf_veth_init 00:20:28.591 11:33:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:28.591 11:33:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:20:28.591 11:33:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:20:28.591 11:33:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:20:28.591 11:33:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:28.591 11:33:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:20:28.591 11:33:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:28.591 11:33:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:20:28.591 11:33:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:28.591 11:33:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:20:28.591 11:33:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:28.591 11:33:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
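At this point host/discovery.sh has sourced test/nvmf/common.sh and pinned the constants that the rest of this log keeps referencing. A condensed recap of the values as they appear in the trace above (nothing re-derived, names and addresses copied verbatim):

    DISCOVERY_PORT=8009                       # discovery service listener on the target
    NVMF_PORT=4420 NVMF_SECOND_PORT=4421      # data listeners added later in the test
    NQN=nqn.2016-06.io.spdk:cnode             # subsystem NQN prefix (cnode0 below)
    HOST_NQN=nqn.2021-12.io.spdk:test         # host NQN passed to bdev_nvme_start_discovery
    HOST_SOCK=/tmp/host.sock                  # RPC socket of the host-side SPDK app
    NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk    # netns that will hold the target-side veth ends
    # initiator IPs: 10.0.0.1 / 10.0.0.2 (root netns); target IPs: 10.0.0.3 / 10.0.0.4 (target netns)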
00:20:28.591 11:33:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:28.591 11:33:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:28.591 11:33:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:28.591 11:33:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:28.591 11:33:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:20:28.591 Cannot find device "nvmf_init_br" 00:20:28.591 11:33:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@162 -- # true 00:20:28.591 11:33:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:20:28.591 Cannot find device "nvmf_init_br2" 00:20:28.591 11:33:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@163 -- # true 00:20:28.591 11:33:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:20:28.591 Cannot find device "nvmf_tgt_br" 00:20:28.591 11:33:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@164 -- # true 00:20:28.591 11:33:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:20:28.591 Cannot find device "nvmf_tgt_br2" 00:20:28.591 11:33:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@165 -- # true 00:20:28.591 11:33:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:20:28.591 Cannot find device "nvmf_init_br" 00:20:28.591 11:33:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@166 -- # true 00:20:28.591 11:33:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:20:28.591 Cannot find device "nvmf_init_br2" 00:20:28.591 11:33:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@167 -- # true 00:20:28.591 11:33:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:20:28.591 Cannot find device "nvmf_tgt_br" 00:20:28.591 11:33:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@168 -- # true 00:20:28.591 11:33:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:20:28.591 Cannot find device "nvmf_tgt_br2" 00:20:28.591 11:33:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@169 -- # true 00:20:28.591 11:33:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:20:28.591 Cannot find device "nvmf_br" 00:20:28.591 11:33:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@170 -- # true 00:20:28.591 11:33:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:20:28.591 Cannot find device "nvmf_init_if" 00:20:28.591 11:33:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@171 -- # true 00:20:28.591 11:33:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:20:28.591 Cannot find device "nvmf_init_if2" 00:20:28.591 11:33:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@172 -- # true 00:20:28.591 11:33:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:28.591 Cannot open network namespace "nvmf_tgt_ns_spdk": No such 
file or directory 00:20:28.591 11:33:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@173 -- # true 00:20:28.591 11:33:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:28.591 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:28.591 11:33:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@174 -- # true 00:20:28.591 11:33:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:20:28.591 11:33:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:28.591 11:33:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:20:28.591 11:33:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:28.591 11:33:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:28.591 11:33:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:28.591 11:33:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:28.848 11:33:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:28.848 11:33:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:20:28.848 11:33:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:20:28.848 11:33:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:20:28.848 11:33:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:20:28.848 11:33:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:20:28.848 11:33:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:20:28.848 11:33:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:20:28.848 11:33:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:20:28.848 11:33:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:20:28.848 11:33:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:28.848 11:33:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:28.848 11:33:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:28.848 11:33:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:20:28.848 11:33:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:20:28.848 11:33:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:20:28.848 11:33:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:20:28.848 11:33:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:28.848 11:33:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:28.848 11:33:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:28.848 11:33:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:20:28.848 11:33:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:20:28.848 11:33:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:20:28.848 11:33:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:28.848 11:33:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:20:28.848 11:33:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:20:28.848 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:28.848 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.063 ms 00:20:28.848 00:20:28.848 --- 10.0.0.3 ping statistics --- 00:20:28.848 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:28.848 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:20:28.848 11:33:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:20:28.848 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:20:28.848 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.039 ms 00:20:28.848 00:20:28.848 --- 10.0.0.4 ping statistics --- 00:20:28.848 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:28.848 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:20:28.848 11:33:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:28.848 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:28.848 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:20:28.848 00:20:28.848 --- 10.0.0.1 ping statistics --- 00:20:28.848 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:28.848 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:20:28.848 11:33:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:20:28.848 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:20:28.848 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.049 ms 00:20:28.848 00:20:28.848 --- 10.0.0.2 ping statistics --- 00:20:28.848 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:28.848 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:20:28.849 11:33:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:28.849 11:33:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@461 -- # return 0 00:20:28.849 11:33:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:28.849 11:33:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:28.849 11:33:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:28.849 11:33:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:28.849 11:33:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:28.849 11:33:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:28.849 11:33:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:28.849 11:33:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:20:28.849 11:33:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:28.849 11:33:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:28.849 11:33:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:28.849 11:33:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # nvmfpid=88960 00:20:28.849 11:33:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:28.849 11:33:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # waitforlisten 88960 00:20:28.849 11:33:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 88960 ']' 00:20:28.849 11:33:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:28.849 11:33:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:28.849 11:33:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:28.849 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:28.849 11:33:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:28.849 11:33:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:29.107 [2024-11-20 11:33:14.445916] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 
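The 11:33:14 lines above are nvmf_veth_init building the virtual test topology and checking it with pings before the target app comes up. A condensed, hand-written equivalent of the traced commands (the per-interface up calls inside the namespace, the iptables comment tags, and two of the four pings are omitted for brevity):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if  type veth peer name nvmf_init_br
    ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
    ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip addr add 10.0.0.2/24 dev nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
    ip link add nvmf_br type bridge
    for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 \
               nvmf_tgt_br nvmf_tgt_br2 nvmf_br; do
        ip link set "$dev" up
    done
    for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" master nvmf_br             # bridge host and target sides together
    done
    iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
    iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.3                                # root netns -> target netns
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 # and back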
00:20:29.107 [2024-11-20 11:33:14.446015] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:29.107 [2024-11-20 11:33:14.590302] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:29.107 [2024-11-20 11:33:14.623911] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:29.107 [2024-11-20 11:33:14.623962] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:29.107 [2024-11-20 11:33:14.623973] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:29.107 [2024-11-20 11:33:14.623982] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:29.107 [2024-11-20 11:33:14.623989] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:29.107 [2024-11-20 11:33:14.624302] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:29.366 11:33:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:29.366 11:33:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:20:29.366 11:33:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:29.366 11:33:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:29.366 11:33:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:29.366 11:33:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:29.366 11:33:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:29.366 11:33:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:29.366 11:33:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:29.366 [2024-11-20 11:33:14.761426] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:29.366 11:33:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:29.366 11:33:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.3 -s 8009 00:20:29.366 11:33:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:29.366 11:33:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:29.366 [2024-11-20 11:33:14.773554] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 00:20:29.366 11:33:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:29.366 11:33:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:20:29.366 11:33:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:29.366 11:33:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:29.366 null0 00:20:29.366 11:33:14 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:29.366 11:33:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:20:29.366 11:33:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:29.366 11:33:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:29.366 null1 00:20:29.366 11:33:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:29.366 11:33:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:20:29.366 11:33:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:29.366 11:33:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:29.366 11:33:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:29.366 11:33:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=88996 00:20:29.366 11:33:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:20:29.366 11:33:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 88996 /tmp/host.sock 00:20:29.366 11:33:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 88996 ']' 00:20:29.366 11:33:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:20:29.366 11:33:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:29.366 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:20:29.366 11:33:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:20:29.366 11:33:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:29.366 11:33:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:29.366 [2024-11-20 11:33:14.863225] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 
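With the topology verified, nvmfappstart has launched the target inside the namespace (pid 88960 above) and host/discovery.sh has provisioned it over the default RPC socket; a second SPDK app (pid 88996) is then started as the host side, listening on /tmp/host.sock. The same sequence written out with scripts/rpc.py standing in for the test's rpc_cmd wrapper (arguments copied from the trace, paths as logged):

    ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
    # target-side provisioning on the default socket /var/tmp/spdk.sock
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.3 -s 8009
    scripts/rpc.py bdev_null_create null0 1000 512    # null bdevs that will back the namespaces
    scripts/rpc.py bdev_null_create null1 1000 512
    scripts/rpc.py bdev_wait_for_examine
    # host side: a second nvmf_tgt whose bdev_nvme layer will act as the discovery client
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock &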
00:20:29.366 [2024-11-20 11:33:14.863326] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88996 ] 00:20:29.626 [2024-11-20 11:33:15.015525] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:29.626 [2024-11-20 11:33:15.053928] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:29.626 11:33:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:29.626 11:33:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:20:29.626 11:33:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:29.626 11:33:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:20:29.626 11:33:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:29.626 11:33:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:29.626 11:33:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:29.626 11:33:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:20:29.626 11:33:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:29.626 11:33:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:29.626 11:33:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:29.626 11:33:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:20:29.626 11:33:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:20:29.626 11:33:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:20:29.626 11:33:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:20:29.626 11:33:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:29.626 11:33:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:29.626 11:33:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:20:29.626 11:33:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:20:29.626 11:33:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:29.885 11:33:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:20:29.885 11:33:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:20:29.885 11:33:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:29.885 11:33:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:20:29.885 11:33:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:29.885 11:33:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@55 -- # sort 00:20:29.885 11:33:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:29.885 11:33:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:20:29.885 11:33:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:29.885 11:33:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:20:29.885 11:33:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:20:29.885 11:33:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:29.885 11:33:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:29.885 11:33:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:29.885 11:33:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:20:29.885 11:33:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:20:29.885 11:33:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:29.885 11:33:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:20:29.885 11:33:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:20:29.885 11:33:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:29.885 11:33:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:20:29.885 11:33:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:29.885 11:33:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:20:29.885 11:33:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:20:29.885 11:33:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:29.885 11:33:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:20:29.885 11:33:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:29.885 11:33:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:20:29.885 11:33:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:20:29.885 11:33:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:29.885 11:33:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:29.885 11:33:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:20:29.885 11:33:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:20:29.885 11:33:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:29.885 11:33:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:29.885 11:33:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:29.885 11:33:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:20:29.885 11:33:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:20:29.885 11:33:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:20:29.885 11:33:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:29.885 11:33:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:20:29.885 11:33:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:29.885 11:33:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:20:29.886 11:33:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:29.886 11:33:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:20:29.886 11:33:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:20:29.886 11:33:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:20:29.886 11:33:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:29.886 11:33:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:29.886 11:33:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:20:29.886 11:33:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:29.886 11:33:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:20:30.145 11:33:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:30.145 11:33:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:20:30.145 11:33:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:20:30.145 11:33:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:30.145 11:33:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:30.145 [2024-11-20 11:33:15.513736] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:20:30.145 11:33:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:30.145 11:33:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:20:30.145 11:33:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:20:30.145 11:33:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:30.145 11:33:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:30.145 11:33:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:20:30.145 11:33:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:20:30.145 11:33:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:20:30.145 11:33:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:30.145 11:33:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:20:30.145 11:33:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:20:30.145 11:33:15 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:30.145 11:33:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:30.145 11:33:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:30.145 11:33:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:20:30.145 11:33:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:20:30.145 11:33:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:20:30.145 11:33:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:30.145 11:33:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:20:30.145 11:33:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:20:30.145 11:33:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:20:30.145 11:33:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:20:30.145 11:33:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:20:30.145 11:33:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:20:30.145 11:33:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:20:30.145 11:33:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:20:30.145 11:33:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:20:30.145 11:33:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:20:30.145 11:33:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:20:30.145 11:33:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:30.145 11:33:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:30.145 11:33:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:30.145 11:33:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:20:30.145 11:33:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:20:30.145 11:33:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:20:30.145 11:33:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:20:30.145 11:33:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:20:30.145 11:33:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:30.145 11:33:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:30.145 11:33:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:30.145 11:33:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:20:30.145 11:33:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:20:30.145 11:33:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:20:30.145 11:33:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:20:30.145 11:33:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:20:30.145 11:33:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:20:30.145 11:33:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:20:30.145 11:33:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:20:30.145 11:33:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:20:30.145 11:33:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:30.145 11:33:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:20:30.145 11:33:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:30.145 11:33:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:30.404 11:33:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == \n\v\m\e\0 ]] 00:20:30.404 11:33:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:20:30.664 [2024-11-20 11:33:16.166827] bdev_nvme.c:7479:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:20:30.664 [2024-11-20 11:33:16.166879] bdev_nvme.c:7565:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:20:30.664 
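The host app is now attaching to the discovery controller at 10.0.0.3:8009 while the test provisions cnode0 on the target (namespace null0, data listener on 4420, allowed host nqn.2021-12.io.spdk:test) and polls the host until the controller and its bdevs appear. A sketch of those helpers as they behave in this trace, again with scripts/rpc.py in place of rpc_cmd (the actual definitions live in host/discovery.sh and common/autotest_common.sh):

    # discovery client on the host side (RPC socket /tmp/host.sock)
    scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 \
        -f ipv4 -q nqn.2021-12.io.spdk:test

    get_subsystem_names() {   # controller names the host has attached, e.g. "nvme0"
        scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs
    }
    get_bdev_list() {         # namespaces exposed as bdevs, e.g. "nvme0n1 nvme0n2"
        scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }
    # waitforcondition retries an expression up to 10 times with a 1 s sleep in between
    waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]'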
[2024-11-20 11:33:16.166910] bdev_nvme.c:7442:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:20:30.925 [2024-11-20 11:33:16.253019] bdev_nvme.c:7408:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem nvme0 00:20:30.925 [2024-11-20 11:33:16.307561] bdev_nvme.c:5635:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.3:4420 00:20:30.925 [2024-11-20 11:33:16.308544] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x121fba0:1 started. 00:20:30.925 [2024-11-20 11:33:16.310385] bdev_nvme.c:7298:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:20:30.925 [2024-11-20 11:33:16.310416] bdev_nvme.c:7257:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:20:30.925 [2024-11-20 11:33:16.315250] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x121fba0 was disconnected and freed. delete nvme_qpair. 00:20:31.185 11:33:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:20:31.185 11:33:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:20:31.185 11:33:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:20:31.185 11:33:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:20:31.185 11:33:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:20:31.185 11:33:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:20:31.185 11:33:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:20:31.185 11:33:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:31.185 11:33:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:31.185 11:33:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:31.444 11:33:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:31.444 11:33:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:20:31.444 11:33:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:20:31.444 11:33:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:20:31.444 11:33:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:20:31.444 11:33:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:20:31.444 11:33:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:20:31.444 11:33:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:20:31.444 11:33:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:31.444 11:33:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:20:31.444 11:33:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:31.444 11:33:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:20:31.444 11:33:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:20:31.444 11:33:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:20:31.444 11:33:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:31.444 11:33:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:20:31.444 11:33:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:20:31.444 11:33:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:20:31.444 11:33:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:20:31.444 11:33:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:20:31.444 11:33:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:20:31.444 11:33:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:20:31.444 11:33:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:20:31.444 11:33:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:20:31.444 11:33:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:20:31.444 11:33:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:31.444 11:33:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:20:31.444 11:33:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:31.444 11:33:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:20:31.444 11:33:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:31.444 11:33:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0 ]] 00:20:31.444 11:33:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:20:31.444 11:33:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:20:31.444 11:33:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:20:31.444 11:33:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:20:31.444 11:33:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:20:31.445 11:33:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:20:31.445 11:33:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:20:31.445 11:33:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval 
get_notification_count '&&' '((notification_count' == 'expected_count))' 00:20:31.445 11:33:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:20:31.445 11:33:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:20:31.445 11:33:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:31.445 11:33:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:31.445 11:33:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:20:31.445 11:33:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:31.445 11:33:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:20:31.445 11:33:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:20:31.445 11:33:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:20:31.445 11:33:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:20:31.445 11:33:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:20:31.445 11:33:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:31.445 11:33:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:31.445 11:33:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:31.445 11:33:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:20:31.445 11:33:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:20:31.445 11:33:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:20:31.445 11:33:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:20:31.445 11:33:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:20:31.445 11:33:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:20:31.445 11:33:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:31.445 11:33:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:20:31.445 11:33:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:20:31.445 11:33:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:31.445 11:33:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:20:31.445 [2024-11-20 11:33:16.979211] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x119a570:1 started. 00:20:31.445 11:33:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:31.445 [2024-11-20 11:33:16.985701] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x119a570 was disconnected and freed. delete nvme_qpair. 
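Each namespace that discovery attaches on the host registers a new bdev and raises a notification; the test counts those through the notify RPC and advances a cursor so each check only sees events newer than the previous one (the count went to 1 after nvme0n1 appeared, and null1 has just been added so nvme0n2 is expected next). The counting step as it behaves in this trace, with scripts/rpc.py in place of rpc_cmd:

    # count notifications newer than the last seen id, then move the cursor forward
    notification_count=$(scripts/rpc.py -s /tmp/host.sock notify_get_notifications -i "$notify_id" | jq '. | length')
    notify_id=$((notify_id + notification_count))
    (( notification_count == expected_count ))    # expected_count is 1 after each new namespace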
00:20:31.445 11:33:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:31.445 11:33:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:20:31.445 11:33:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:20:31.445 11:33:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:20:31.445 11:33:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:20:31.445 11:33:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:20:31.445 11:33:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:20:31.445 11:33:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:20:31.706 11:33:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:20:31.706 11:33:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:20:31.706 11:33:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:20:31.706 11:33:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:20:31.706 11:33:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:20:31.706 11:33:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:31.706 11:33:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:31.706 11:33:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:31.706 11:33:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:20:31.706 11:33:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:20:31.706 11:33:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:20:31.706 11:33:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:20:31.706 11:33:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4421 00:20:31.706 11:33:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:31.706 11:33:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:31.706 [2024-11-20 11:33:17.102407] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:20:31.706 [2024-11-20 11:33:17.103332] bdev_nvme.c:7461:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:20:31.706 [2024-11-20 11:33:17.103372] bdev_nvme.c:7442:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:20:31.706 11:33:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:31.706 11:33:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ 
"$(get_subsystem_names)" == "nvme0" ]]' 00:20:31.706 11:33:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:20:31.706 11:33:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:20:31.706 11:33:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:20:31.706 11:33:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:20:31.706 11:33:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:20:31.706 11:33:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:20:31.706 11:33:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:20:31.706 11:33:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:31.706 11:33:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:31.706 11:33:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:20:31.706 11:33:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:20:31.706 11:33:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:31.706 11:33:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:31.706 11:33:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:20:31.706 11:33:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:20:31.706 11:33:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:20:31.706 11:33:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:20:31.706 11:33:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:20:31.706 11:33:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:20:31.706 11:33:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:20:31.706 11:33:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:31.706 11:33:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:31.706 11:33:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:31.706 11:33:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:20:31.706 11:33:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:20:31.706 11:33:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:20:31.706 [2024-11-20 11:33:17.189408] bdev_nvme.c:7403:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 new path for nvme0 00:20:31.706 11:33:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:31.706 11:33:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 
nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:20:31.706 11:33:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:20:31.706 11:33:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:20:31.706 11:33:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:20:31.706 11:33:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:20:31.706 11:33:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:20:31.706 11:33:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:20:31.706 11:33:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:20:31.706 11:33:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:20:31.706 11:33:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:20:31.706 11:33:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:31.706 11:33:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:31.706 11:33:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:20:31.706 11:33:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:20:31.706 11:33:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:31.706 [2024-11-20 11:33:17.249930] bdev_nvme.c:5635:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.3:4421 00:20:31.706 [2024-11-20 11:33:17.249994] bdev_nvme.c:7298:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:20:31.706 [2024-11-20 11:33:17.250008] bdev_nvme.c:7257:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:20:31.706 [2024-11-20 11:33:17.250021] bdev_nvme.c:7257:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:20:31.706 11:33:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:20:31.706 11:33:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:20:33.087 11:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:20:33.087 11:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:20:33.087 11:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:20:33.087 11:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:20:33.087 11:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:33.087 11:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r 
'.[].ctrlrs[].trid.trsvcid' 00:20:33.087 11:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:33.087 11:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:20:33.087 11:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:20:33.087 11:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:33.087 11:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:20:33.087 11:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:20:33.087 11:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:20:33.087 11:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:20:33.087 11:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:20:33.087 11:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:20:33.087 11:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:20:33.087 11:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:20:33.087 11:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:20:33.087 11:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:20:33.087 11:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:20:33.088 11:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:33.088 11:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:33.088 11:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:20:33.088 11:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:33.088 11:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:20:33.088 11:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:20:33.088 11:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:20:33.088 11:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:20:33.088 11:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:20:33.088 11:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:33.088 11:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:33.088 [2024-11-20 11:33:18.392915] bdev_nvme.c:7461:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:20:33.088 [2024-11-20 11:33:18.392972] bdev_nvme.c:7442:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:20:33.088 11:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:33.088 11:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:20:33.088 11:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:20:33.088 11:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:20:33.088 11:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:20:33.088 11:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:20:33.088 [2024-11-20 11:33:18.398608] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:33.088 [2024-11-20 11:33:18.398673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.088 [2024-11-20 11:33:18.398694] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:33.088 [2024-11-20 11:33:18.398710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.088 [2024-11-20 11:33:18.398727] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:33.088 [2024-11-20 11:33:18.398742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.088 [2024-11-20 11:33:18.398758] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:33.088 [2024-11-20 11:33:18.398772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.088 [2024-11-20 11:33:18.398812] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x11f2280 is same with the state(6) to be set 00:20:33.088 11:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:20:33.088 11:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:20:33.088 11:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:20:33.088 11:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:20:33.088 11:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:33.088 11:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:20:33.088 11:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:33.088 [2024-11-20 11:33:18.408554] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11f2280 (9): Bad file descriptor 00:20:33.088 11:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:33.088 [2024-11-20 11:33:18.418563] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:20:33.088 [2024-11-20 11:33:18.418595] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:20:33.088 [2024-11-20 11:33:18.418607] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:20:33.088 [2024-11-20 11:33:18.418625] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:20:33.088 [2024-11-20 11:33:18.418679] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:20:33.088 [2024-11-20 11:33:18.418777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:33.088 [2024-11-20 11:33:18.418802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11f2280 with addr=10.0.0.3, port=4420 00:20:33.088 [2024-11-20 11:33:18.418816] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11f2280 is same with the state(6) to be set 00:20:33.088 [2024-11-20 11:33:18.418835] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11f2280 (9): Bad file descriptor 00:20:33.088 [2024-11-20 11:33:18.418851] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:20:33.088 [2024-11-20 11:33:18.418861] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:20:33.088 [2024-11-20 11:33:18.418872] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:20:33.088 [2024-11-20 11:33:18.418881] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:20:33.088 [2024-11-20 11:33:18.418888] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:20:33.088 [2024-11-20 11:33:18.418894] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
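The xtrace lines above repeatedly drive a generic polling helper, waitforcondition, from common/autotest_common.sh: it re-evaluates a condition string up to ten times, sleeping one second between attempts. A minimal sketch reconstructed from the traced commands (the real implementation may differ, for example in how the failure path is reported) is:

waitforcondition() {
	# cond is a bash expression passed as a single string, e.g.
	# '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
	local cond=$1
	local max=10
	while (( max-- )); do
		if eval "$cond"; then
			return 0
		fi
		sleep 1
	done
	return 1
}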
00:20:33.088 [2024-11-20 11:33:18.428675] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:20:33.088 [2024-11-20 11:33:18.428707] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:20:33.088 [2024-11-20 11:33:18.428736] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:20:33.088 [2024-11-20 11:33:18.428745] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:20:33.088 [2024-11-20 11:33:18.428774] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:20:33.088 [2024-11-20 11:33:18.428879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:33.088 [2024-11-20 11:33:18.428905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11f2280 with addr=10.0.0.3, port=4420 00:20:33.088 [2024-11-20 11:33:18.428916] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11f2280 is same with the state(6) to be set 00:20:33.088 [2024-11-20 11:33:18.428954] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11f2280 (9): Bad file descriptor 00:20:33.088 [2024-11-20 11:33:18.428991] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:20:33.088 [2024-11-20 11:33:18.429030] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:20:33.088 [2024-11-20 11:33:18.429053] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:20:33.088 [2024-11-20 11:33:18.429063] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:20:33.088 [2024-11-20 11:33:18.429069] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:20:33.088 [2024-11-20 11:33:18.429074] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:20:33.088 [2024-11-20 11:33:18.438792] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:20:33.088 [2024-11-20 11:33:18.438825] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:20:33.088 [2024-11-20 11:33:18.438833] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:20:33.088 [2024-11-20 11:33:18.438861] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:20:33.088 [2024-11-20 11:33:18.438889] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:20:33.088 [2024-11-20 11:33:18.438972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:33.088 [2024-11-20 11:33:18.439020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11f2280 with addr=10.0.0.3, port=4420 00:20:33.088 [2024-11-20 11:33:18.439034] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11f2280 is same with the state(6) to be set 00:20:33.088 [2024-11-20 11:33:18.439053] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11f2280 (9): Bad file descriptor 00:20:33.088 [2024-11-20 11:33:18.439149] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:20:33.088 [2024-11-20 11:33:18.439166] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:20:33.088 [2024-11-20 11:33:18.439177] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:20:33.088 [2024-11-20 11:33:18.439186] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:20:33.088 [2024-11-20 11:33:18.439193] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:20:33.088 [2024-11-20 11:33:18.439198] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:20:33.088 [2024-11-20 11:33:18.448909] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:20:33.088 [2024-11-20 11:33:18.448941] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:20:33.088 [2024-11-20 11:33:18.448949] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:20:33.088 [2024-11-20 11:33:18.448975] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:20:33.088 [2024-11-20 11:33:18.449010] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:20:33.088 [2024-11-20 11:33:18.449091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:33.088 [2024-11-20 11:33:18.449137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11f2280 with addr=10.0.0.3, port=4420 00:20:33.088 [2024-11-20 11:33:18.449150] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11f2280 is same with the state(6) to be set 00:20:33.088 [2024-11-20 11:33:18.449168] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11f2280 (9): Bad file descriptor 00:20:33.088 [2024-11-20 11:33:18.449228] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:20:33.088 [2024-11-20 11:33:18.449242] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:20:33.089 [2024-11-20 11:33:18.449276] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:20:33.089 [2024-11-20 11:33:18.449287] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
00:20:33.089 [2024-11-20 11:33:18.449293] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:20:33.089 [2024-11-20 11:33:18.449298] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:20:33.089 11:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:33.089 11:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:20:33.089 11:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:20:33.089 11:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:20:33.089 11:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:20:33.089 11:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:20:33.089 11:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:20:33.089 11:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:20:33.089 11:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:20:33.089 11:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:33.089 11:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:20:33.089 11:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:20:33.089 11:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:33.089 11:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:33.089 [2024-11-20 11:33:18.459029] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:20:33.089 [2024-11-20 11:33:18.459059] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:20:33.089 [2024-11-20 11:33:18.459067] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:20:33.089 [2024-11-20 11:33:18.459073] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:20:33.089 [2024-11-20 11:33:18.459122] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:20:33.089 [2024-11-20 11:33:18.459205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:33.089 [2024-11-20 11:33:18.459253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11f2280 with addr=10.0.0.3, port=4420 00:20:33.089 [2024-11-20 11:33:18.459274] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11f2280 is same with the state(6) to be set 00:20:33.089 [2024-11-20 11:33:18.459294] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11f2280 (9): Bad file descriptor 00:20:33.089 [2024-11-20 11:33:18.459346] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:20:33.089 [2024-11-20 11:33:18.459359] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:20:33.089 [2024-11-20 11:33:18.459369] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:20:33.089 [2024-11-20 11:33:18.459400] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:20:33.089 [2024-11-20 11:33:18.459408] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:20:33.089 [2024-11-20 11:33:18.459414] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:20:33.089 [2024-11-20 11:33:18.469135] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:20:33.089 [2024-11-20 11:33:18.469187] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:20:33.089 [2024-11-20 11:33:18.469197] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:20:33.089 [2024-11-20 11:33:18.469203] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:20:33.089 [2024-11-20 11:33:18.469253] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:20:33.089 [2024-11-20 11:33:18.469365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:33.089 [2024-11-20 11:33:18.469409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11f2280 with addr=10.0.0.3, port=4420 00:20:33.089 [2024-11-20 11:33:18.469424] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11f2280 is same with the state(6) to be set 00:20:33.089 [2024-11-20 11:33:18.469441] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11f2280 (9): Bad file descriptor 00:20:33.089 [2024-11-20 11:33:18.469477] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:20:33.089 [2024-11-20 11:33:18.469490] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:20:33.089 [2024-11-20 11:33:18.469501] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:20:33.089 [2024-11-20 11:33:18.469512] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
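The get_subsystem_names, get_bdev_list and get_subsystem_paths conditions polled in this test are thin wrappers around RPCs issued against the host application's socket (/tmp/host.sock in this run). Sketches matching the jq/sort/xargs pipelines visible in the trace:

get_subsystem_names() {
	# NVMe controller names known to the host, space-separated
	rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs
}

get_bdev_list() {
	# Attached bdev names, e.g. "nvme0n1 nvme0n2"
	rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
}

get_subsystem_paths() {
	# Listener ports (trsvcid) of every path to controller $1, numerically sorted
	rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n "$1" | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
}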
00:20:33.089 [2024-11-20 11:33:18.469518] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:20:33.089 [2024-11-20 11:33:18.469523] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:20:33.089 [2024-11-20 11:33:18.479143] bdev_nvme.c:7266:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 not found 00:20:33.089 [2024-11-20 11:33:18.479177] bdev_nvme.c:7257:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:20:33.089 11:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:33.089 11:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:20:33.089 11:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:20:33.089 11:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:20:33.089 11:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:20:33.089 11:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:20:33.089 11:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:20:33.089 11:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:20:33.089 11:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:20:33.089 11:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:20:33.089 11:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:20:33.089 11:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:20:33.089 11:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:33.089 11:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:33.089 11:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:20:33.089 11:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:33.089 11:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4421 == \4\4\2\1 ]] 00:20:33.089 11:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:20:33.089 11:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:20:33.089 11:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:20:33.089 11:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:20:33.089 11:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:20:33.089 11:33:18 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:20:33.089 11:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:20:33.089 11:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:20:33.089 11:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:20:33.089 11:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:20:33.089 11:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:20:33.089 11:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:33.089 11:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:33.089 11:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:33.089 11:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:20:33.089 11:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:20:33.089 11:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:20:33.089 11:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:20:33.089 11:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:20:33.089 11:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:33.089 11:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:33.089 11:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:33.089 11:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:20:33.089 11:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:20:33.089 11:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:20:33.089 11:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:20:33.089 11:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:20:33.089 11:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:20:33.089 11:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:20:33.089 11:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:20:33.090 11:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:20:33.090 11:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:20:33.090 11:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:33.090 11:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:33.090 11:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:33.353 11:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:20:33.353 11:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:20:33.353 11:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:20:33.353 11:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:20:33.353 11:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:20:33.353 11:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:20:33.353 11:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:20:33.353 11:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:20:33.353 11:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:33.353 11:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:33.353 11:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:20:33.353 11:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:33.353 11:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:20:33.353 11:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:20:33.353 11:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:33.353 11:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:20:33.353 11:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:20:33.353 11:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:20:33.353 11:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:20:33.353 11:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:20:33.354 11:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:20:33.354 11:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:20:33.354 11:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:20:33.354 11:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:20:33.354 11:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:20:33.354 11:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:20:33.354 11:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:33.354 11:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:20:33.354 11:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:33.354 11:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:33.354 11:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:20:33.354 11:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:20:33.354 11:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:20:33.354 11:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:20:33.354 11:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:20:33.354 11:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:33.354 11:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:34.316 [2024-11-20 11:33:19.832988] bdev_nvme.c:7479:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:20:34.316 [2024-11-20 11:33:19.833031] bdev_nvme.c:7565:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:20:34.316 [2024-11-20 11:33:19.833055] bdev_nvme.c:7442:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:20:34.576 [2024-11-20 11:33:19.919125] bdev_nvme.c:7408:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 new subsystem nvme0 00:20:34.576 [2024-11-20 11:33:19.977631] bdev_nvme.c:5635:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] ctrlr was created to 10.0.0.3:4421 00:20:34.576 [2024-11-20 11:33:19.978312] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0x122b900:1 started. 00:20:34.576 [2024-11-20 11:33:19.980420] bdev_nvme.c:7298:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:20:34.576 [2024-11-20 11:33:19.980471] bdev_nvme.c:7257:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:20:34.576 11:33:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:34.576 [2024-11-20 11:33:19.981841] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] qpair 0x122b900 was disconnected and freed. delete nvme_qpair. 
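The is_notification_count_eq checks earlier in the test keep a running notify_id cursor and count new notifications with notify_get_notifications piped through jq '. | length'. The sketch below is inferred from the traced values (counts of 1, 0 and 2 moving notify_id from 1 to 2 and then to 4); the actual host/discovery.sh helpers may differ slightly:

get_notification_count() {
	# Count notifications newer than the current cursor, then advance it
	notification_count=$(rpc_cmd -s /tmp/host.sock notify_get_notifications -i "$notify_id" | jq '. | length')
	notify_id=$((notify_id + notification_count))
}

is_notification_count_eq() {
	local expected_count=$1
	waitforcondition 'get_notification_count && ((notification_count == expected_count))'
}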
00:20:34.576 11:33:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:20:34.576 11:33:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:20:34.576 11:33:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:20:34.576 11:33:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:20:34.576 11:33:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:34.576 11:33:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:20:34.576 11:33:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:34.576 11:33:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:20:34.576 11:33:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:34.576 11:33:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:34.576 2024/11/20 11:33:19 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 hostnqn:nqn.2021-12.io.spdk:test name:nvme traddr:10.0.0.3 trsvcid:8009 trtype:tcp wait_for_attach:%!s(bool=true)], err: error received for bdev_nvme_start_discovery method, err: Code=-17 Msg=File exists 00:20:34.576 request: 00:20:34.576 { 00:20:34.576 "method": "bdev_nvme_start_discovery", 00:20:34.576 "params": { 00:20:34.576 "name": "nvme", 00:20:34.576 "trtype": "tcp", 00:20:34.576 "traddr": "10.0.0.3", 00:20:34.576 "adrfam": "ipv4", 00:20:34.576 "trsvcid": "8009", 00:20:34.576 "hostnqn": "nqn.2021-12.io.spdk:test", 00:20:34.576 "wait_for_attach": true 00:20:34.576 } 00:20:34.576 } 00:20:34.576 Got JSON-RPC error response 00:20:34.576 GoRPCClient: error on JSON-RPC call 00:20:34.576 11:33:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:20:34.576 11:33:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:20:34.576 11:33:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:34.576 11:33:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:34.576 11:33:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:34.576 11:33:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:20:34.576 11:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:20:34.576 11:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:20:34.576 11:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:20:34.576 11:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:20:34.576 11:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:34.576 11:33:20 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:34.576 11:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:34.576 11:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:20:34.576 11:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:20:34.576 11:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:34.576 11:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:34.576 11:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:34.576 11:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:20:34.576 11:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:20:34.576 11:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:20:34.576 11:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:34.576 11:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:20:34.576 11:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:20:34.576 11:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:20:34.576 11:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:20:34.576 11:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:20:34.576 11:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:34.576 11:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:20:34.576 11:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:34.576 11:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:20:34.576 11:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:34.576 11:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:34.576 2024/11/20 11:33:20 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 hostnqn:nqn.2021-12.io.spdk:test name:nvme_second traddr:10.0.0.3 trsvcid:8009 trtype:tcp wait_for_attach:%!s(bool=true)], err: error received for bdev_nvme_start_discovery method, err: Code=-17 Msg=File exists 00:20:34.576 request: 00:20:34.576 { 00:20:34.576 "method": "bdev_nvme_start_discovery", 00:20:34.576 "params": { 00:20:34.576 "name": "nvme_second", 00:20:34.576 "trtype": "tcp", 00:20:34.576 "traddr": "10.0.0.3", 00:20:34.576 "adrfam": "ipv4", 00:20:34.576 "trsvcid": "8009", 00:20:34.576 "hostnqn": "nqn.2021-12.io.spdk:test", 00:20:34.576 "wait_for_attach": true 00:20:34.576 } 
00:20:34.576 } 00:20:34.576 Got JSON-RPC error response 00:20:34.576 GoRPCClient: error on JSON-RPC call 00:20:34.576 11:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:20:34.576 11:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:20:34.576 11:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:34.576 11:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:34.576 11:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:34.576 11:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:20:34.576 11:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:20:34.576 11:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:34.576 11:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:34.577 11:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:20:34.577 11:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:20:34.577 11:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:20:34.577 11:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:34.834 11:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:20:34.834 11:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:20:34.834 11:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:34.834 11:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:34.834 11:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:20:34.834 11:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:34.834 11:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:20:34.834 11:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:20:34.834 11:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:34.834 11:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:20:34.835 11:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:20:34.835 11:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:20:34.835 11:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:20:34.835 11:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:20:34.835 11:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:34.835 11:33:20 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:20:34.835 11:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:34.835 11:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:20:34.835 11:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:34.835 11:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:35.771 [2024-11-20 11:33:21.252872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:35.771 [2024-11-20 11:33:21.252941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x122b700 with addr=10.0.0.3, port=8010 00:20:35.771 [2024-11-20 11:33:21.252965] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:20:35.771 [2024-11-20 11:33:21.252976] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:20:35.771 [2024-11-20 11:33:21.252985] bdev_nvme.c:7547:discovery_poller: *ERROR*: Discovery[10.0.0.3:8010] could not start discovery connect 00:20:36.709 [2024-11-20 11:33:22.252867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:36.709 [2024-11-20 11:33:22.252937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x122b700 with addr=10.0.0.3, port=8010 00:20:36.709 [2024-11-20 11:33:22.252960] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:20:36.709 [2024-11-20 11:33:22.252971] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:20:36.709 [2024-11-20 11:33:22.252981] bdev_nvme.c:7547:discovery_poller: *ERROR*: Discovery[10.0.0.3:8010] could not start discovery connect 00:20:38.084 [2024-11-20 11:33:23.252703] bdev_nvme.c:7522:discovery_poller: *ERROR*: Discovery[10.0.0.3:8010] timed out while attaching discovery ctrlr 00:20:38.084 2024/11/20 11:33:23 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 attach_timeout_ms:3000 hostnqn:nqn.2021-12.io.spdk:test name:nvme_second traddr:10.0.0.3 trsvcid:8010 trtype:tcp wait_for_attach:%!s(bool=false)], err: error received for bdev_nvme_start_discovery method, err: Code=-110 Msg=Connection timed out 00:20:38.084 request: 00:20:38.084 { 00:20:38.084 "method": "bdev_nvme_start_discovery", 00:20:38.084 "params": { 00:20:38.084 "name": "nvme_second", 00:20:38.084 "trtype": "tcp", 00:20:38.084 "traddr": "10.0.0.3", 00:20:38.084 "adrfam": "ipv4", 00:20:38.084 "trsvcid": "8010", 00:20:38.084 "hostnqn": "nqn.2021-12.io.spdk:test", 00:20:38.084 "wait_for_attach": false, 00:20:38.084 "attach_timeout_ms": 3000 00:20:38.084 } 00:20:38.084 } 00:20:38.085 Got JSON-RPC error response 00:20:38.085 GoRPCClient: error on JSON-RPC call 00:20:38.085 11:33:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:20:38.085 11:33:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:20:38.085 11:33:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:38.085 11:33:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:38.085 11:33:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:38.085 11:33:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:20:38.085 11:33:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:20:38.085 11:33:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:38.085 11:33:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:20:38.085 11:33:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:38.085 11:33:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:20:38.085 11:33:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:20:38.085 11:33:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:38.085 11:33:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:20:38.085 11:33:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:20:38.085 11:33:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 88996 00:20:38.085 11:33:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:20:38.085 11:33:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:38.085 11:33:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync 00:20:38.085 11:33:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:38.085 11:33:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:20:38.085 11:33:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:38.085 11:33:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:38.085 rmmod nvme_tcp 00:20:38.085 rmmod nvme_fabrics 00:20:38.085 rmmod nvme_keyring 00:20:38.085 11:33:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:38.085 11:33:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:20:38.085 11:33:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:20:38.085 11:33:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@517 -- # '[' -n 88960 ']' 00:20:38.085 11:33:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # killprocess 88960 00:20:38.085 11:33:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # '[' -z 88960 ']' 00:20:38.085 11:33:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # kill -0 88960 00:20:38.085 11:33:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # uname 00:20:38.085 11:33:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:38.085 11:33:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88960 00:20:38.085 11:33:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:38.085 11:33:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:38.085 killing process with pid 88960 00:20:38.085 11:33:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 88960' 00:20:38.085 11:33:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@973 -- # kill 88960 00:20:38.085 11:33:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@978 -- # wait 88960 00:20:38.085 11:33:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:38.085 11:33:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:38.085 11:33:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:38.085 11:33:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:20:38.085 11:33:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-save 00:20:38.085 11:33:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:20:38.085 11:33:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:38.085 11:33:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:38.085 11:33:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:20:38.085 11:33:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:20:38.085 11:33:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:20:38.085 11:33:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:20:38.085 11:33:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:20:38.085 11:33:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:20:38.085 11:33:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:20:38.344 11:33:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:20:38.344 11:33:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:20:38.344 11:33:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:20:38.344 11:33:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:20:38.344 11:33:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:20:38.344 11:33:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:38.344 11:33:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:38.344 11:33:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@246 -- # remove_spdk_ns 00:20:38.344 11:33:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:38.344 11:33:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:38.344 11:33:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:38.344 11:33:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@300 -- # return 0 00:20:38.344 00:20:38.344 real 0m10.030s 00:20:38.344 user 0m19.652s 00:20:38.344 sys 0m1.637s 
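The duplicate bdev_nvme_start_discovery calls above (rejected with Code=-17 Msg=File exists) and the timed-out attempt against port 8010 (Code=-110) are expected failures, so the test wraps them in the NOT helper. Stripped of the argument validation visible in the trace (valid_exec_arg and the es bookkeeping), the negation amounts to roughly:

NOT() {
	local es=0
	# Run the wrapped command, capturing its exit status
	"$@" || es=$?
	# NOT succeeds only when the wrapped command failed
	(( es != 0 ))
}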
00:20:38.344 11:33:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:38.344 ************************************ 00:20:38.344 11:33:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:38.344 END TEST nvmf_host_discovery 00:20:38.344 ************************************ 00:20:38.344 11:33:23 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:20:38.344 11:33:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:38.344 11:33:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:38.344 11:33:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:20:38.344 ************************************ 00:20:38.344 START TEST nvmf_host_multipath_status 00:20:38.344 ************************************ 00:20:38.344 11:33:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:20:38.604 * Looking for test storage... 00:20:38.604 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:38.604 11:33:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:20:38.604 11:33:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # lcov --version 00:20:38.604 11:33:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:20:38.604 11:33:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:20:38.604 11:33:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:38.604 11:33:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:38.604 11:33:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:38.604 11:33:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:20:38.604 11:33:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:20:38.604 11:33:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:20:38.604 11:33:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:20:38.604 11:33:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:20:38.604 11:33:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:20:38.604 11:33:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:20:38.604 11:33:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:38.604 11:33:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:20:38.604 11:33:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:20:38.604 11:33:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:38.604 11:33:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:38.604 11:33:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:20:38.604 11:33:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:20:38.604 11:33:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:38.604 11:33:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:20:38.604 11:33:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:20:38.604 11:33:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:20:38.604 11:33:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:20:38.604 11:33:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:38.604 11:33:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:20:38.604 11:33:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:20:38.604 11:33:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:38.604 11:33:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:38.604 11:33:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:20:38.604 11:33:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:38.604 11:33:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:20:38.604 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:38.604 --rc genhtml_branch_coverage=1 00:20:38.604 --rc genhtml_function_coverage=1 00:20:38.604 --rc genhtml_legend=1 00:20:38.604 --rc geninfo_all_blocks=1 00:20:38.604 --rc geninfo_unexecuted_blocks=1 00:20:38.604 00:20:38.604 ' 00:20:38.604 11:33:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:20:38.604 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:38.604 --rc genhtml_branch_coverage=1 00:20:38.604 --rc genhtml_function_coverage=1 00:20:38.604 --rc genhtml_legend=1 00:20:38.604 --rc geninfo_all_blocks=1 00:20:38.604 --rc geninfo_unexecuted_blocks=1 00:20:38.604 00:20:38.604 ' 00:20:38.604 11:33:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:20:38.604 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:38.604 --rc genhtml_branch_coverage=1 00:20:38.604 --rc genhtml_function_coverage=1 00:20:38.604 --rc genhtml_legend=1 00:20:38.604 --rc geninfo_all_blocks=1 00:20:38.604 --rc geninfo_unexecuted_blocks=1 00:20:38.604 00:20:38.604 ' 00:20:38.604 11:33:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:20:38.604 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:38.604 --rc genhtml_branch_coverage=1 00:20:38.604 --rc genhtml_function_coverage=1 00:20:38.604 --rc genhtml_legend=1 00:20:38.604 --rc geninfo_all_blocks=1 00:20:38.604 --rc geninfo_unexecuted_blocks=1 00:20:38.604 00:20:38.604 ' 00:20:38.604 11:33:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:38.604 11:33:24 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:20:38.604 11:33:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:38.604 11:33:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:38.604 11:33:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:38.604 11:33:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:38.604 11:33:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:38.604 11:33:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:38.604 11:33:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:38.604 11:33:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:38.604 11:33:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:38.604 11:33:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:38.604 11:33:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a 00:20:38.604 11:33:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=806d442c-08ba-4719-b76d-33169283459a 00:20:38.604 11:33:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:38.604 11:33:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:38.604 11:33:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:38.604 11:33:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:38.604 11:33:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:38.605 11:33:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:20:38.605 11:33:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:38.605 11:33:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:38.605 11:33:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:38.605 11:33:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:38.605 11:33:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:38.605 11:33:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:38.605 11:33:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:20:38.605 11:33:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:38.605 11:33:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:20:38.605 11:33:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:38.605 11:33:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:38.605 11:33:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:38.605 11:33:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:38.605 11:33:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:38.605 11:33:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:38.605 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:38.605 11:33:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:38.605 11:33:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:38.605 11:33:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:38.605 11:33:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:20:38.605 11:33:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:20:38.605 11:33:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:38.605 11:33:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:20:38.605 11:33:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:38.605 11:33:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:20:38.605 11:33:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:20:38.605 11:33:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:38.605 11:33:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:38.605 11:33:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:38.605 11:33:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:38.605 11:33:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:38.605 11:33:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:38.605 11:33:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:38.605 11:33:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:38.605 11:33:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:20:38.605 11:33:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:20:38.605 11:33:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:20:38.605 11:33:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:20:38.605 11:33:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:20:38.605 11:33:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@460 -- # nvmf_veth_init 00:20:38.605 11:33:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:38.605 11:33:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:20:38.605 11:33:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:20:38.605 11:33:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:20:38.605 11:33:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:38.605 11:33:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:20:38.605 11:33:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:38.605 11:33:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:20:38.605 11:33:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@153 -- # 
NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:38.605 11:33:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:20:38.605 11:33:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:38.605 11:33:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:38.605 11:33:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:38.605 11:33:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:38.605 11:33:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:38.605 11:33:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:38.605 11:33:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:20:38.605 Cannot find device "nvmf_init_br" 00:20:38.605 11:33:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # true 00:20:38.605 11:33:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:20:38.605 Cannot find device "nvmf_init_br2" 00:20:38.605 11:33:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # true 00:20:38.605 11:33:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:20:38.605 Cannot find device "nvmf_tgt_br" 00:20:38.605 11:33:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@164 -- # true 00:20:38.605 11:33:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:20:38.605 Cannot find device "nvmf_tgt_br2" 00:20:38.605 11:33:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@165 -- # true 00:20:38.605 11:33:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:20:38.605 Cannot find device "nvmf_init_br" 00:20:38.605 11:33:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@166 -- # true 00:20:38.605 11:33:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:20:38.605 Cannot find device "nvmf_init_br2" 00:20:38.605 11:33:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@167 -- # true 00:20:38.605 11:33:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:20:38.605 Cannot find device "nvmf_tgt_br" 00:20:38.605 11:33:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@168 -- # true 00:20:38.605 11:33:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:20:38.605 Cannot find device "nvmf_tgt_br2" 00:20:38.605 11:33:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@169 -- # true 00:20:38.605 11:33:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:20:38.865 Cannot find device "nvmf_br" 00:20:38.865 11:33:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@170 -- # true 00:20:38.865 11:33:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@171 -- # ip link delete 
nvmf_init_if 00:20:38.865 Cannot find device "nvmf_init_if" 00:20:38.865 11:33:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@171 -- # true 00:20:38.865 11:33:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:20:38.865 Cannot find device "nvmf_init_if2" 00:20:38.865 11:33:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@172 -- # true 00:20:38.865 11:33:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:38.865 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:38.865 11:33:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@173 -- # true 00:20:38.865 11:33:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:38.865 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:38.865 11:33:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@174 -- # true 00:20:38.865 11:33:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:20:38.865 11:33:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:38.865 11:33:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:20:38.865 11:33:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:38.865 11:33:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:38.865 11:33:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:38.865 11:33:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:38.865 11:33:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:38.865 11:33:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:20:38.865 11:33:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:20:38.865 11:33:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:20:38.865 11:33:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:20:38.865 11:33:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:20:38.865 11:33:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:20:38.865 11:33:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:20:38.865 11:33:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:20:38.865 11:33:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:20:38.865 11:33:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:38.865 11:33:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:38.865 11:33:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:38.865 11:33:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:20:38.865 11:33:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:20:38.865 11:33:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:20:38.865 11:33:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:20:38.865 11:33:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:38.865 11:33:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:38.865 11:33:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:38.865 11:33:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:20:38.865 11:33:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:20:38.865 11:33:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:20:39.124 11:33:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:39.124 11:33:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:20:39.124 11:33:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:20:39.124 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:39.124 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.132 ms 00:20:39.124 00:20:39.124 --- 10.0.0.3 ping statistics --- 00:20:39.124 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:39.124 rtt min/avg/max/mdev = 0.132/0.132/0.132/0.000 ms 00:20:39.124 11:33:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:20:39.124 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:20:39.124 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.062 ms 00:20:39.124 00:20:39.124 --- 10.0.0.4 ping statistics --- 00:20:39.124 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:39.124 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:20:39.124 11:33:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:39.124 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:39.124 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:20:39.124 00:20:39.124 --- 10.0.0.1 ping statistics --- 00:20:39.124 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:39.124 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:20:39.124 11:33:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:20:39.124 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:39.124 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.108 ms 00:20:39.124 00:20:39.124 --- 10.0.0.2 ping statistics --- 00:20:39.124 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:39.124 rtt min/avg/max/mdev = 0.108/0.108/0.108/0.000 ms 00:20:39.124 11:33:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:39.124 11:33:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@461 -- # return 0 00:20:39.124 11:33:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:39.124 11:33:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:39.124 11:33:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:39.124 11:33:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:39.124 11:33:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:39.124 11:33:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:39.124 11:33:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:39.124 11:33:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:20:39.124 11:33:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:39.124 11:33:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:39.124 11:33:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:20:39.124 11:33:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # nvmfpid=89518 00:20:39.124 11:33:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:20:39.124 11:33:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@510 -- # waitforlisten 89518 00:20:39.124 11:33:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 89518 ']' 00:20:39.124 11:33:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:39.124 11:33:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:39.124 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:39.125 11:33:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:20:39.125 11:33:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:39.125 11:33:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:20:39.125 [2024-11-20 11:33:24.571259] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 00:20:39.125 [2024-11-20 11:33:24.571394] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:39.383 [2024-11-20 11:33:24.726416] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:20:39.383 [2024-11-20 11:33:24.765761] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:39.383 [2024-11-20 11:33:24.765832] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:39.384 [2024-11-20 11:33:24.765845] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:39.384 [2024-11-20 11:33:24.765855] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:39.384 [2024-11-20 11:33:24.765865] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:39.384 [2024-11-20 11:33:24.766899] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:39.384 [2024-11-20 11:33:24.766911] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:39.384 11:33:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:39.384 11:33:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:20:39.384 11:33:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:39.384 11:33:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:39.384 11:33:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:20:39.384 11:33:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:39.384 11:33:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=89518 00:20:39.384 11:33:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:20:39.642 [2024-11-20 11:33:25.180083] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:39.642 11:33:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:20:40.215 Malloc0 00:20:40.215 11:33:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:20:40.473 11:33:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:40.732 11:33:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:20:40.990 [2024-11-20 11:33:26.420138] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:20:40.990 11:33:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:20:41.248 [2024-11-20 11:33:26.688291] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:20:41.248 11:33:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=89607 00:20:41.248 11:33:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:20:41.248 11:33:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:41.248 11:33:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 89607 /var/tmp/bdevperf.sock 00:20:41.248 11:33:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 89607 ']' 00:20:41.248 11:33:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:41.248 11:33:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:41.248 11:33:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:41.248 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:20:41.248 11:33:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:41.248 11:33:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:20:41.507 11:33:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:41.507 11:33:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:20:41.507 11:33:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:20:41.766 11:33:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:20:42.333 Nvme0n1 00:20:42.333 11:33:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:20:42.591 Nvme0n1 00:20:42.591 11:33:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:20:42.591 11:33:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:20:45.123 11:33:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:20:45.123 11:33:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:20:45.123 11:33:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:20:45.381 11:33:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:20:46.315 11:33:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:20:46.315 11:33:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:20:46.315 11:33:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:46.315 11:33:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:20:46.882 11:33:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:46.882 11:33:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:20:46.882 11:33:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:46.882 11:33:32 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:20:47.141 11:33:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:20:47.141 11:33:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:20:47.141 11:33:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:47.141 11:33:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:20:47.399 11:33:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:47.399 11:33:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:20:47.399 11:33:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:47.399 11:33:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:20:47.965 11:33:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:47.965 11:33:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:20:47.965 11:33:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:20:47.965 11:33:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:48.222 11:33:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:48.222 11:33:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:20:48.222 11:33:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:20:48.222 11:33:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:48.788 11:33:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:48.788 11:33:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:20:48.788 11:33:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:20:49.046 11:33:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:20:49.304 11:33:34 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:20:50.258 11:33:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:20:50.259 11:33:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:20:50.259 11:33:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:50.259 11:33:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:20:50.823 11:33:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:20:50.823 11:33:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:20:50.823 11:33:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:20:50.823 11:33:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:51.081 11:33:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:51.081 11:33:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:20:51.081 11:33:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:51.081 11:33:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:20:51.646 11:33:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:51.646 11:33:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:20:51.646 11:33:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:20:51.646 11:33:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:51.939 11:33:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:51.939 11:33:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:20:51.939 11:33:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:51.939 11:33:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:20:52.196 11:33:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:52.196 11:33:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:20:52.196 11:33:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:20:52.196 11:33:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:52.455 11:33:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:52.455 11:33:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:20:52.455 11:33:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:20:53.021 11:33:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n non_optimized 00:20:53.280 11:33:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:20:54.215 11:33:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:20:54.215 11:33:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:20:54.215 11:33:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:20:54.215 11:33:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:54.474 11:33:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:54.474 11:33:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:20:54.474 11:33:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:54.474 11:33:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:20:55.040 11:33:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:20:55.040 11:33:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:20:55.040 11:33:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:20:55.040 11:33:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:55.299 11:33:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:55.299 11:33:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 
connected true 00:20:55.299 11:33:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:55.299 11:33:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:20:55.558 11:33:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:55.558 11:33:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:20:55.558 11:33:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:55.558 11:33:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:20:55.816 11:33:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:55.816 11:33:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:20:55.816 11:33:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:55.817 11:33:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:20:56.383 11:33:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:56.383 11:33:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:20:56.384 11:33:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:20:56.642 11:33:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:20:56.901 11:33:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:20:57.836 11:33:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:20:57.836 11:33:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:20:57.836 11:33:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:20:57.836 11:33:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:58.402 11:33:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:58.402 11:33:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:20:58.402 11:33:43 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:58.402 11:33:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:20:58.660 11:33:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:20:58.660 11:33:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:20:58.660 11:33:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:58.660 11:33:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:20:58.919 11:33:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:58.919 11:33:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:20:58.919 11:33:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:58.919 11:33:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:20:59.486 11:33:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:59.486 11:33:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:20:59.486 11:33:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:59.486 11:33:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:20:59.744 11:33:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:59.744 11:33:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:20:59.744 11:33:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:59.744 11:33:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:21:00.060 11:33:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:21:00.060 11:33:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:21:00.060 11:33:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:21:00.318 11:33:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:21:00.576 11:33:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:21:01.511 11:33:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:21:01.511 11:33:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:21:01.511 11:33:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:01.511 11:33:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:21:01.769 11:33:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:21:01.769 11:33:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:21:01.769 11:33:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:21:01.769 11:33:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:02.336 11:33:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:21:02.336 11:33:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:21:02.336 11:33:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:02.336 11:33:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:21:02.595 11:33:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:02.595 11:33:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:21:02.595 11:33:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:02.595 11:33:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:21:02.854 11:33:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:02.854 11:33:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:21:02.854 11:33:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:02.854 11:33:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4420").accessible' 00:21:03.112 11:33:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:21:03.112 11:33:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:21:03.112 11:33:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:21:03.112 11:33:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:03.678 11:33:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:21:03.678 11:33:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:21:03.678 11:33:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:21:03.934 11:33:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:21:04.192 11:33:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:21:05.126 11:33:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:21:05.126 11:33:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:21:05.126 11:33:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:05.126 11:33:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:21:05.707 11:33:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:21:05.707 11:33:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:21:05.707 11:33:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:05.707 11:33:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:21:05.966 11:33:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:05.966 11:33:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:21:05.966 11:33:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:21:05.966 11:33:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 
00:21:06.225 11:33:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:06.225 11:33:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:21:06.225 11:33:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:06.225 11:33:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:21:06.484 11:33:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:06.484 11:33:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:21:06.484 11:33:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:06.484 11:33:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:21:06.744 11:33:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:21:06.744 11:33:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:21:06.744 11:33:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:21:06.744 11:33:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:07.340 11:33:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:07.340 11:33:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:21:07.340 11:33:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:21:07.341 11:33:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:21:07.908 11:33:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:21:08.166 11:33:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:21:09.101 11:33:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:21:09.101 11:33:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:21:09.101 11:33:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 
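Another hedged aside: at @116 the multipath policy of Nvme0n1 is switched to active_active, and the set_ANA_state helper at @59/@60 then issues one nvmf_subsystem_listener_set_ana_state call per listener (ports 4420 and 4421). A minimal sketch of that pair of calls, copied verbatim from the trace except for the hypothetical wrapper name set_ana_pair:

set_ana_pair() {
    # first listener (port 4420), then second listener (port 4421)
    local state_4420=$1 state_4421=$2
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state \
        nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n "$state_4420"
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state \
        nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n "$state_4421"
}
# e.g. set_ana_pair optimized optimized; under the active_active policy the following
# check_status then expects both paths to report current == true, as traced above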
00:21:09.101 11:33:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:21:09.359 11:33:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:09.359 11:33:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:21:09.619 11:33:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:09.619 11:33:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:21:09.877 11:33:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:09.877 11:33:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:21:09.877 11:33:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:09.877 11:33:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:21:10.134 11:33:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:10.134 11:33:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:21:10.134 11:33:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:10.134 11:33:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:21:10.392 11:33:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:10.392 11:33:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:21:10.392 11:33:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:10.392 11:33:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:21:10.650 11:33:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:10.650 11:33:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:21:10.650 11:33:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:21:10.650 11:33:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:11.217 11:33:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:11.217 
11:33:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:21:11.217 11:33:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:21:11.476 11:33:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:21:11.735 11:33:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:21:12.671 11:33:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:21:12.671 11:33:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:21:12.671 11:33:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:21:12.671 11:33:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:13.238 11:33:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:21:13.238 11:33:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:21:13.238 11:33:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:13.238 11:33:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:21:13.497 11:33:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:13.497 11:33:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:21:13.497 11:33:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:13.497 11:33:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:21:13.756 11:33:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:13.756 11:33:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:21:13.756 11:33:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:13.756 11:33:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:21:14.014 11:33:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:14.014 11:33:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:21:14.271 11:33:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:14.271 11:33:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:21:14.529 11:33:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:14.530 11:33:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:21:14.530 11:33:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:14.530 11:33:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:21:14.788 11:34:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:14.788 11:34:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:21:14.788 11:34:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:21:15.047 11:34:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n non_optimized 00:21:15.614 11:34:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:21:16.551 11:34:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:21:16.551 11:34:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:21:16.551 11:34:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:16.551 11:34:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:21:16.809 11:34:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:16.809 11:34:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:21:16.809 11:34:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:16.809 11:34:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:21:17.376 11:34:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:17.377 11:34:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 
connected true 00:21:17.377 11:34:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:17.377 11:34:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:21:17.635 11:34:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:17.635 11:34:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:21:17.635 11:34:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:17.635 11:34:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:21:17.894 11:34:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:17.894 11:34:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:21:17.894 11:34:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:21:17.894 11:34:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:18.153 11:34:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:18.153 11:34:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:21:18.153 11:34:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:18.153 11:34:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:21:18.720 11:34:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:18.720 11:34:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:21:18.720 11:34:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:21:18.979 11:34:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:21:19.237 11:34:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:21:20.173 11:34:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:21:20.173 11:34:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:21:20.173 11:34:05 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:20.173 11:34:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:21:20.738 11:34:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:20.738 11:34:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:21:20.738 11:34:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:20.738 11:34:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:21:20.997 11:34:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:21:20.997 11:34:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:21:20.997 11:34:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:20.997 11:34:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:21:21.256 11:34:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:21.256 11:34:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:21:21.256 11:34:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:21:21.256 11:34:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:21.514 11:34:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:21.514 11:34:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:21:21.514 11:34:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:21.514 11:34:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:21:21.772 11:34:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:21.772 11:34:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:21:21.772 11:34:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:21.772 11:34:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4421").accessible' 00:21:22.361 11:34:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:21:22.361 11:34:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 89607 00:21:22.361 11:34:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 89607 ']' 00:21:22.361 11:34:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 89607 00:21:22.361 11:34:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:21:22.361 11:34:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:22.361 11:34:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89607 00:21:22.361 11:34:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:21:22.361 11:34:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:21:22.361 killing process with pid 89607 00:21:22.361 11:34:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89607' 00:21:22.361 11:34:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 89607 00:21:22.361 11:34:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 89607 00:21:22.361 { 00:21:22.361 "results": [ 00:21:22.361 { 00:21:22.361 "job": "Nvme0n1", 00:21:22.361 "core_mask": "0x4", 00:21:22.361 "workload": "verify", 00:21:22.361 "status": "terminated", 00:21:22.361 "verify_range": { 00:21:22.361 "start": 0, 00:21:22.361 "length": 16384 00:21:22.361 }, 00:21:22.361 "queue_depth": 128, 00:21:22.361 "io_size": 4096, 00:21:22.361 "runtime": 39.460104, 00:21:22.361 "iops": 8113.029808537757, 00:21:22.361 "mibps": 31.691522689600614, 00:21:22.361 "io_failed": 0, 00:21:22.361 "io_timeout": 0, 00:21:22.361 "avg_latency_us": 15743.968488066765, 00:21:22.361 "min_latency_us": 217.83272727272728, 00:21:22.361 "max_latency_us": 4087539.898181818 00:21:22.361 } 00:21:22.361 ], 00:21:22.361 "core_count": 1 00:21:22.361 } 00:21:22.361 11:34:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 89607 00:21:22.361 11:34:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:21:22.361 [2024-11-20 11:33:26.758305] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 00:21:22.361 [2024-11-20 11:33:26.758413] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89607 ] 00:21:22.361 [2024-11-20 11:33:26.900413] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:22.361 [2024-11-20 11:33:26.933583] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:22.361 Running I/O for 90 seconds... 
00:21:22.361 8758.00 IOPS, 34.21 MiB/s [2024-11-20T11:34:07.950Z] 8909.00 IOPS, 34.80 MiB/s [2024-11-20T11:34:07.950Z] 8596.33 IOPS, 33.58 MiB/s [2024-11-20T11:34:07.950Z] 8518.50 IOPS, 33.28 MiB/s [2024-11-20T11:34:07.950Z] 8479.40 IOPS, 33.12 MiB/s [2024-11-20T11:34:07.950Z] 8333.50 IOPS, 32.55 MiB/s [2024-11-20T11:34:07.950Z] 8208.29 IOPS, 32.06 MiB/s [2024-11-20T11:34:07.950Z] 7989.00 IOPS, 31.21 MiB/s [2024-11-20T11:34:07.950Z] 8053.67 IOPS, 31.46 MiB/s [2024-11-20T11:34:07.950Z] 8114.40 IOPS, 31.70 MiB/s [2024-11-20T11:34:07.950Z] 8160.64 IOPS, 31.88 MiB/s [2024-11-20T11:34:07.950Z] 8203.83 IOPS, 32.05 MiB/s [2024-11-20T11:34:07.950Z] 8222.38 IOPS, 32.12 MiB/s [2024-11-20T11:34:07.950Z] 8276.14 IOPS, 32.33 MiB/s [2024-11-20T11:34:07.950Z] 8286.73 IOPS, 32.37 MiB/s [2024-11-20T11:34:07.950Z] 8310.06 IOPS, 32.46 MiB/s [2024-11-20T11:34:07.950Z] 8323.53 IOPS, 32.51 MiB/s [2024-11-20T11:34:07.950Z] [2024-11-20 11:33:45.755939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:119568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.361 [2024-11-20 11:33:45.756041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:21:22.361 [2024-11-20 11:33:45.756101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:119576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.361 [2024-11-20 11:33:45.756135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:21:22.361 [2024-11-20 11:33:45.756177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:119584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.361 [2024-11-20 11:33:45.756209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:21:22.361 [2024-11-20 11:33:45.756250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:119592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.361 [2024-11-20 11:33:45.756276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:21:22.361 [2024-11-20 11:33:45.756323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:119600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.361 [2024-11-20 11:33:45.756348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:21:22.361 [2024-11-20 11:33:45.756385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:119976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.361 [2024-11-20 11:33:45.756410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:21:22.361 [2024-11-20 11:33:45.756447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:119984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.361 [2024-11-20 11:33:45.756472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:21:22.361 [2024-11-20 11:33:45.756508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 
nsid:1 lba:119992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.361 [2024-11-20 11:33:45.756532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:21:22.361 [2024-11-20 11:33:45.756568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:120000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.361 [2024-11-20 11:33:45.756593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:21:22.361 [2024-11-20 11:33:45.756691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:120008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.361 [2024-11-20 11:33:45.756719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:21:22.361 [2024-11-20 11:33:45.756757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:120016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.361 [2024-11-20 11:33:45.756782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:21:22.361 [2024-11-20 11:33:45.756820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:120024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.361 [2024-11-20 11:33:45.756845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:21:22.361 [2024-11-20 11:33:45.756891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:120032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.361 [2024-11-20 11:33:45.756917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:21:22.361 [2024-11-20 11:33:45.756954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:120040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.361 [2024-11-20 11:33:45.756979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:21:22.361 [2024-11-20 11:33:45.757020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:120048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.361 [2024-11-20 11:33:45.757051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:21:22.361 [2024-11-20 11:33:45.757091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:120056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.361 [2024-11-20 11:33:45.757117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:21:22.361 [2024-11-20 11:33:45.757160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:120064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.361 [2024-11-20 11:33:45.757187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:21:22.361 [2024-11-20 11:33:45.757225] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:120072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.361 [2024-11-20 11:33:45.757253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:21:22.361 [2024-11-20 11:33:45.757295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:120080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.362 [2024-11-20 11:33:45.757328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:21:22.362 [2024-11-20 11:33:45.757368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:120088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.362 [2024-11-20 11:33:45.757398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:21:22.362 [2024-11-20 11:33:45.757440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:120096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.362 [2024-11-20 11:33:45.757469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:21:22.362 [2024-11-20 11:33:45.757532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:120104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.362 [2024-11-20 11:33:45.757564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:21:22.362 [2024-11-20 11:33:45.757606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:120112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.362 [2024-11-20 11:33:45.757669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:21:22.362 [2024-11-20 11:33:45.757815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:120120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.362 [2024-11-20 11:33:45.757861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:21:22.362 [2024-11-20 11:33:45.757903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:120128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.362 [2024-11-20 11:33:45.757942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:21:22.362 [2024-11-20 11:33:45.757993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:120136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.362 [2024-11-20 11:33:45.758021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:21:22.362 [2024-11-20 11:33:45.758061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:120144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.362 [2024-11-20 11:33:45.758088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:001f p:0 m:0 
dnr:0 00:21:22.362 [2024-11-20 11:33:45.758127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:120152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.362 [2024-11-20 11:33:45.758167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:21:22.362 [2024-11-20 11:33:45.758208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:120160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.362 [2024-11-20 11:33:45.758240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:22.362 [2024-11-20 11:33:45.758299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:120168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.362 [2024-11-20 11:33:45.758335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:22.362 [2024-11-20 11:33:45.758375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:120176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.362 [2024-11-20 11:33:45.758404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:21:22.362 [2024-11-20 11:33:45.758443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:120184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.362 [2024-11-20 11:33:45.758470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:21:22.362 [2024-11-20 11:33:45.758510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:120192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.362 [2024-11-20 11:33:45.758537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:21:22.362 [2024-11-20 11:33:45.758576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:120200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.362 [2024-11-20 11:33:45.758656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:21:22.362 [2024-11-20 11:33:45.758701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:120208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.362 [2024-11-20 11:33:45.758729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:21:22.362 [2024-11-20 11:33:45.758768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:120216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.362 [2024-11-20 11:33:45.758796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:21:22.362 [2024-11-20 11:33:45.758835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:120224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.362 [2024-11-20 11:33:45.758863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:21:22.362 [2024-11-20 11:33:45.758901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:120232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.362 [2024-11-20 11:33:45.758928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:21:22.362 [2024-11-20 11:33:45.758967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:120240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.362 [2024-11-20 11:33:45.758994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:21:22.362 [2024-11-20 11:33:45.759033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:120248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.362 [2024-11-20 11:33:45.759061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:21:22.362 [2024-11-20 11:33:45.759100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:120256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.362 [2024-11-20 11:33:45.759127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:21:22.362 [2024-11-20 11:33:45.759166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:120264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.362 [2024-11-20 11:33:45.759194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:21:22.362 [2024-11-20 11:33:45.759247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:120272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.362 [2024-11-20 11:33:45.759285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:21:22.362 [2024-11-20 11:33:45.759325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:120280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.362 [2024-11-20 11:33:45.759353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:21:22.362 [2024-11-20 11:33:45.759393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:120288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.362 [2024-11-20 11:33:45.759420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:21:22.362 [2024-11-20 11:33:45.759459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:120296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.362 [2024-11-20 11:33:45.759503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:21:22.362 [2024-11-20 11:33:45.759553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:120304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.362 [2024-11-20 11:33:45.759582] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:21:22.362 [2024-11-20 11:33:45.759654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:120312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.362 [2024-11-20 11:33:45.759694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:21:22.362 [2024-11-20 11:33:45.759740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:120320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.363 [2024-11-20 11:33:45.759770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:21:22.363 [2024-11-20 11:33:45.759818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:120328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.363 [2024-11-20 11:33:45.759846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:21:22.363 [2024-11-20 11:33:45.759885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:120336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.363 [2024-11-20 11:33:45.759912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:21:22.363 [2024-11-20 11:33:45.759951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:120344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.363 [2024-11-20 11:33:45.759978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:21:22.363 [2024-11-20 11:33:45.760017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:120352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.363 [2024-11-20 11:33:45.760044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:21:22.363 [2024-11-20 11:33:45.760083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:120360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.363 [2024-11-20 11:33:45.760110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:21:22.363 [2024-11-20 11:33:45.760149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:120368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.363 [2024-11-20 11:33:45.760177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:21:22.363 [2024-11-20 11:33:45.760215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:120376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.363 [2024-11-20 11:33:45.760243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:21:22.363 [2024-11-20 11:33:45.760281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:120384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:21:22.363 [2024-11-20 11:33:45.760310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:21:22.363 [2024-11-20 11:33:45.760349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:120392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.363 [2024-11-20 11:33:45.760383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:21:22.363 [2024-11-20 11:33:45.760437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:120400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.363 [2024-11-20 11:33:45.760466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:21:22.363 [2024-11-20 11:33:45.760506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:120408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.363 [2024-11-20 11:33:45.760534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:21:22.363 [2024-11-20 11:33:45.762085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:120416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.363 [2024-11-20 11:33:45.762151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:22.363 [2024-11-20 11:33:45.762210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:120424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.363 [2024-11-20 11:33:45.762244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:22.363 [2024-11-20 11:33:45.762286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:120432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.363 [2024-11-20 11:33:45.762314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:21:22.363 [2024-11-20 11:33:45.762366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:120440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.363 [2024-11-20 11:33:45.762397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:21:22.363 [2024-11-20 11:33:45.762444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:119608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.363 [2024-11-20 11:33:45.762475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:21:22.363 [2024-11-20 11:33:45.762514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:119616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.363 [2024-11-20 11:33:45.762542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:21:22.363 [2024-11-20 11:33:45.762580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 
nsid:1 lba:119624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.363 [2024-11-20 11:33:45.762608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:21:22.363 [2024-11-20 11:33:45.762674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:119632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.363 [2024-11-20 11:33:45.762703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:21:22.363 [2024-11-20 11:33:45.762742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:119640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.363 [2024-11-20 11:33:45.762769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:21:22.363 [2024-11-20 11:33:45.762808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:119648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.363 [2024-11-20 11:33:45.762835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:21:22.363 [2024-11-20 11:33:45.762898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:119656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.363 [2024-11-20 11:33:45.762927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:21:22.363 [2024-11-20 11:33:45.762965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:119664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.363 [2024-11-20 11:33:45.762992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:21:22.363 [2024-11-20 11:33:45.763031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:119672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.363 [2024-11-20 11:33:45.763060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:21:22.363 [2024-11-20 11:33:45.763098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:119680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.363 [2024-11-20 11:33:45.763125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:21:22.363 [2024-11-20 11:33:45.763174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:119688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.363 [2024-11-20 11:33:45.763202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:21:22.363 [2024-11-20 11:33:45.763241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:119696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.363 [2024-11-20 11:33:45.763268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:21:22.363 [2024-11-20 11:33:45.763316] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:119704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.363 [2024-11-20 11:33:45.763353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:21:22.363 [2024-11-20 11:33:45.763393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:119712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.363 [2024-11-20 11:33:45.763421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:21:22.363 [2024-11-20 11:33:45.763460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:119720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.363 [2024-11-20 11:33:45.763487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:21:22.363 [2024-11-20 11:33:45.763526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:119728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.363 [2024-11-20 11:33:45.763553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:21:22.363 [2024-11-20 11:33:45.763601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:119736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.363 [2024-11-20 11:33:45.763690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:21:22.364 [2024-11-20 11:33:45.763754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:119744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.364 [2024-11-20 11:33:45.763804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:21:22.364 [2024-11-20 11:33:45.763883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:120448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.364 [2024-11-20 11:33:45.763916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:21:22.364 [2024-11-20 11:33:45.763956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:120456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.364 [2024-11-20 11:33:45.763984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:21:22.364 [2024-11-20 11:33:45.764024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:120464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.364 [2024-11-20 11:33:45.764051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:21:22.364 [2024-11-20 11:33:45.764089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:120472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.364 [2024-11-20 11:33:45.764117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 
sqhd:005a p:0 m:0 dnr:0 00:21:22.364 [2024-11-20 11:33:45.764155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:120480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.364 [2024-11-20 11:33:45.764183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:21:22.364 [2024-11-20 11:33:45.764228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:120488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.364 [2024-11-20 11:33:45.764254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:21:22.364 [2024-11-20 11:33:45.764294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:120496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.364 [2024-11-20 11:33:45.764322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:21:22.364 [2024-11-20 11:33:45.764361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:120504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.364 [2024-11-20 11:33:45.764388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:21:22.364 [2024-11-20 11:33:45.764427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:120512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.364 [2024-11-20 11:33:45.764454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:21:22.364 [2024-11-20 11:33:45.764493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:120520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.364 [2024-11-20 11:33:45.764520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:21:22.364 [2024-11-20 11:33:45.764559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:120528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.364 [2024-11-20 11:33:45.764586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:22.364 [2024-11-20 11:33:45.764650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:120536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.364 [2024-11-20 11:33:45.764682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:22.364 [2024-11-20 11:33:45.764722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:119752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.364 [2024-11-20 11:33:45.764765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:21:22.364 [2024-11-20 11:33:45.764822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:119760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.364 [2024-11-20 11:33:45.764859] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:21:22.364 [2024-11-20 11:33:45.764899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:119768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.364 [2024-11-20 11:33:45.764926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:21:22.364 [2024-11-20 11:33:45.764965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:119776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.364 [2024-11-20 11:33:45.765007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:21:22.364 [2024-11-20 11:33:45.765060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:119784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.364 [2024-11-20 11:33:45.765089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:21:22.364 [2024-11-20 11:33:45.765128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:119792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.364 [2024-11-20 11:33:45.765155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:21:22.364 [2024-11-20 11:33:45.765194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:119800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.364 [2024-11-20 11:33:45.765221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:21:22.364 [2024-11-20 11:33:45.765260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:119808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.364 [2024-11-20 11:33:45.765287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:21:22.364 [2024-11-20 11:33:45.765326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:119816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.364 [2024-11-20 11:33:45.765353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:21:22.364 [2024-11-20 11:33:45.765401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:119824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.364 [2024-11-20 11:33:45.765428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:21:22.364 [2024-11-20 11:33:45.765467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:119832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.364 [2024-11-20 11:33:45.765494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:21:22.364 [2024-11-20 11:33:45.765534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:119840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.364 
[2024-11-20 11:33:45.765561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:21:22.364 [2024-11-20 11:33:45.765600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:119848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.364 [2024-11-20 11:33:45.765691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:21:22.364 [2024-11-20 11:33:45.765736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:119856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.364 [2024-11-20 11:33:45.765764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:21:22.364 [2024-11-20 11:33:45.765803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:119864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.364 [2024-11-20 11:33:45.765830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:21:22.364 [2024-11-20 11:33:45.765869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:119872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.364 [2024-11-20 11:33:45.765895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:21:22.365 [2024-11-20 11:33:45.765949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:119880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.365 [2024-11-20 11:33:45.765969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:21:22.365 [2024-11-20 11:33:45.765997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:119888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.365 [2024-11-20 11:33:45.766017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:21:22.365 [2024-11-20 11:33:45.766049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:119896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.365 [2024-11-20 11:33:45.766075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:21:22.365 [2024-11-20 11:33:45.766104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:119904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.365 [2024-11-20 11:33:45.766123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:21:22.365 [2024-11-20 11:33:45.766151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:119912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.365 [2024-11-20 11:33:45.766170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:21:22.365 [2024-11-20 11:33:45.766197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 
lba:119920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.365 [2024-11-20 11:33:45.766216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:21:22.365 [2024-11-20 11:33:45.766243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:119928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.365 [2024-11-20 11:33:45.766265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:21:22.365 [2024-11-20 11:33:45.766298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:119936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.365 [2024-11-20 11:33:45.766318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:21:22.365 [2024-11-20 11:33:45.766345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:119944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.365 [2024-11-20 11:33:45.766374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:21:22.365 [2024-11-20 11:33:45.766402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:119952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.365 [2024-11-20 11:33:45.766422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:21:22.365 [2024-11-20 11:33:45.766449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:119960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.365 [2024-11-20 11:33:45.766468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:21:22.365 [2024-11-20 11:33:45.766495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:119968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.365 [2024-11-20 11:33:45.766514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:21:22.365 [2024-11-20 11:33:45.766541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:120544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.365 [2024-11-20 11:33:45.766560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:21:22.365 [2024-11-20 11:33:45.766587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:120552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.365 [2024-11-20 11:33:45.766606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:22.365 [2024-11-20 11:33:45.766646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:120560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.365 [2024-11-20 11:33:45.766670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:22.365 [2024-11-20 11:33:45.766697] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:120568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.365 [2024-11-20 11:33:45.766717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:22.365 [2024-11-20 11:33:45.766744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:120576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.365 [2024-11-20 11:33:45.766763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:21:22.365 [2024-11-20 11:33:45.766790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:120584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.365 [2024-11-20 11:33:45.766810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:21:22.365 [2024-11-20 11:33:45.766836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:119568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.365 [2024-11-20 11:33:45.766855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:21:22.365 [2024-11-20 11:33:45.766883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:119576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.365 [2024-11-20 11:33:45.766902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:21:22.365 [2024-11-20 11:33:45.766929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:119584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.365 [2024-11-20 11:33:45.766949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:21:22.365 [2024-11-20 11:33:45.766985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:119592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.365 [2024-11-20 11:33:45.767006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:21:22.365 [2024-11-20 11:33:45.767032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:119600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.365 [2024-11-20 11:33:45.767052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:21:22.365 [2024-11-20 11:33:45.767079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:119976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.365 [2024-11-20 11:33:45.767099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:21:22.365 [2024-11-20 11:33:45.768150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:119984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.365 [2024-11-20 11:33:45.768187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:000b p:0 m:0 
dnr:0 00:21:22.365 [2024-11-20 11:33:45.768222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:119992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.365 [2024-11-20 11:33:45.768244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:21:22.365 [2024-11-20 11:33:45.768271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:120000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.365 [2024-11-20 11:33:45.768291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:21:22.365 [2024-11-20 11:33:45.768318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:120008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.365 [2024-11-20 11:33:45.768337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:21:22.365 [2024-11-20 11:33:45.768364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:120016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.365 [2024-11-20 11:33:45.768383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:21:22.365 [2024-11-20 11:33:45.768410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:120024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.365 [2024-11-20 11:33:45.768429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:21:22.365 [2024-11-20 11:33:45.768463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:120032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.365 [2024-11-20 11:33:45.768483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:21:22.366 [2024-11-20 11:33:45.768509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:120040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.366 [2024-11-20 11:33:45.768529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:21:22.366 [2024-11-20 11:33:45.768559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:120048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.366 [2024-11-20 11:33:45.768595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:21:22.366 [2024-11-20 11:33:45.768667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:120056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.366 [2024-11-20 11:33:45.768691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:21:22.366 [2024-11-20 11:33:45.768718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:120064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.366 [2024-11-20 11:33:45.768738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:21:22.366 [2024-11-20 11:33:45.768764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:120072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.366 [2024-11-20 11:33:45.768784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:21:22.366 [2024-11-20 11:33:45.768825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:120080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.366 [2024-11-20 11:33:45.768848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:21:22.366 [2024-11-20 11:33:45.768875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:120088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.366 [2024-11-20 11:33:45.768894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:21:22.366 [2024-11-20 11:33:45.768922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:120096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.366 [2024-11-20 11:33:45.768941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:21:22.366 [2024-11-20 11:33:45.768968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:120104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.366 [2024-11-20 11:33:45.768988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:21:22.366 [2024-11-20 11:33:45.769014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:120112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.366 [2024-11-20 11:33:45.769034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:21:22.366 [2024-11-20 11:33:45.769060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:120120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.366 [2024-11-20 11:33:45.769080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:21:22.366 [2024-11-20 11:33:45.769107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:120128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.366 [2024-11-20 11:33:45.769126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:21:22.366 [2024-11-20 11:33:45.769152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:120136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.366 [2024-11-20 11:33:45.769172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:21:22.366 [2024-11-20 11:33:45.769199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:120144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.366 [2024-11-20 11:33:45.769218] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:21:22.366 [2024-11-20 11:33:45.769245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:120152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.366 [2024-11-20 11:33:45.769274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:21:22.366 [2024-11-20 11:33:45.769306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:120160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.366 [2024-11-20 11:33:45.769326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:22.366 [2024-11-20 11:33:45.769353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:120168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.366 [2024-11-20 11:33:45.769372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:22.366 [2024-11-20 11:33:45.769399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:120176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.366 [2024-11-20 11:33:45.769419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:21:22.366 [2024-11-20 11:33:45.769446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:120184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.366 [2024-11-20 11:33:45.769465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:21:22.366 [2024-11-20 11:33:45.769492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:120192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.366 [2024-11-20 11:33:45.769512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:21:22.366 [2024-11-20 11:33:45.769538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:120200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.366 [2024-11-20 11:33:45.769558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:21:22.366 [2024-11-20 11:33:45.769585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:120208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.366 [2024-11-20 11:33:45.769604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:21:22.366 [2024-11-20 11:33:45.769666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:120216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.366 [2024-11-20 11:33:45.769699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:21:22.366 [2024-11-20 11:33:45.769726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:120224 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:21:22.366 [2024-11-20 11:33:45.769745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:21:22.366 [2024-11-20 11:33:45.769772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:120232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.366 [2024-11-20 11:33:45.769791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:21:22.367 [2024-11-20 11:33:45.769818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:120240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.367 [2024-11-20 11:33:45.769837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:21:22.367 [2024-11-20 11:33:45.769864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:120248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.367 [2024-11-20 11:33:45.769895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:21:22.367 [2024-11-20 11:33:45.769928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:120256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.367 [2024-11-20 11:33:45.769957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:21:22.367 [2024-11-20 11:33:45.769986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:120264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.367 [2024-11-20 11:33:45.770006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:21:22.367 [2024-11-20 11:33:45.770033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:120272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.367 [2024-11-20 11:33:45.770052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:21:22.367 [2024-11-20 11:33:45.770079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:120280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.367 [2024-11-20 11:33:45.770101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:21:22.367 [2024-11-20 11:33:45.770134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:120288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.367 [2024-11-20 11:33:45.770160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:21:22.367 [2024-11-20 11:33:45.770188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:120296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.367 [2024-11-20 11:33:45.770208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:21:22.367 [2024-11-20 11:33:45.770235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:50 nsid:1 lba:120304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.367 [2024-11-20 11:33:45.770254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:21:22.367 [2024-11-20 11:33:45.770281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:120312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.367 [2024-11-20 11:33:45.770300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:21:22.367 [2024-11-20 11:33:45.770327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:120320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.367 [2024-11-20 11:33:45.770346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:21:22.367 [2024-11-20 11:33:45.770372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:120328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.367 [2024-11-20 11:33:45.770392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:21:22.367 [2024-11-20 11:33:45.770418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:120336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.367 [2024-11-20 11:33:45.770437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:21:22.367 [2024-11-20 11:33:45.770464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:120344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.367 [2024-11-20 11:33:45.770483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:21:22.367 [2024-11-20 11:33:45.770524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:120352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.367 [2024-11-20 11:33:45.770545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:21:22.367 [2024-11-20 11:33:45.770571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:120360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.367 [2024-11-20 11:33:45.770591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:21:22.367 [2024-11-20 11:33:45.770633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:120368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.367 [2024-11-20 11:33:45.770656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:21:22.367 [2024-11-20 11:33:45.770683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:120376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.367 [2024-11-20 11:33:45.770703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:21:22.367 [2024-11-20 11:33:45.770729] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:120384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.367 [2024-11-20 11:33:45.770749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:21:22.367 [2024-11-20 11:33:45.770776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:120392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.367 [2024-11-20 11:33:45.770795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:21:22.367 [2024-11-20 11:33:45.770823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:120400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.367 [2024-11-20 11:33:45.770843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:21:22.367 [2024-11-20 11:33:45.771678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:120408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.367 [2024-11-20 11:33:45.771714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:21:22.367 [2024-11-20 11:33:45.771748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:120416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.367 [2024-11-20 11:33:45.771770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:22.367 [2024-11-20 11:33:45.771797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:120424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.367 [2024-11-20 11:33:45.771817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:22.367 [2024-11-20 11:33:45.771844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:120432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.367 [2024-11-20 11:33:45.771863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:21:22.367 [2024-11-20 11:33:45.771890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:120440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.367 [2024-11-20 11:33:45.771910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:21:22.367 [2024-11-20 11:33:45.771952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:119608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.367 [2024-11-20 11:33:45.771972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:21:22.367 [2024-11-20 11:33:45.771999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:119616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.367 [2024-11-20 11:33:45.772019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0046 
p:0 m:0 dnr:0 00:21:22.367 [2024-11-20 11:33:45.772045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:119624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.367 [2024-11-20 11:33:45.772064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:21:22.367 [2024-11-20 11:33:45.772091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:119632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.367 [2024-11-20 11:33:45.772110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:21:22.367 [2024-11-20 11:33:45.772137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:119640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.367 [2024-11-20 11:33:45.772155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:21:22.367 [2024-11-20 11:33:45.772182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:119648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.367 [2024-11-20 11:33:45.772201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:21:22.367 [2024-11-20 11:33:45.772228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:119656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.367 [2024-11-20 11:33:45.772247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:21:22.368 [2024-11-20 11:33:45.772274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:119664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.368 [2024-11-20 11:33:45.772293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:21:22.368 [2024-11-20 11:33:45.772319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:119672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.368 [2024-11-20 11:33:45.772339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:21:22.368 [2024-11-20 11:33:45.772366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:119680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.368 [2024-11-20 11:33:45.772385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:21:22.368 [2024-11-20 11:33:45.772411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:119688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.368 [2024-11-20 11:33:45.772438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:21:22.368 [2024-11-20 11:33:45.772478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:119696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.368 [2024-11-20 11:33:45.772502] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:21:22.368 [2024-11-20 11:33:45.772540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:119704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.368 [2024-11-20 11:33:45.772561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:21:22.368 [2024-11-20 11:33:45.772588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:119712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.368 [2024-11-20 11:33:45.772607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:21:22.368 [2024-11-20 11:33:45.772662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:119720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.368 [2024-11-20 11:33:45.772684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:21:22.368 [2024-11-20 11:33:45.772711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:119728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.368 [2024-11-20 11:33:45.772730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:21:22.368 [2024-11-20 11:33:45.772757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:119736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.368 [2024-11-20 11:33:45.772776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:21:22.368 [2024-11-20 11:33:45.772803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:119744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.368 [2024-11-20 11:33:45.772822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:21:22.368 [2024-11-20 11:33:45.772849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:120448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.368 [2024-11-20 11:33:45.772868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:21:22.368 [2024-11-20 11:33:45.772894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:120456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.368 [2024-11-20 11:33:45.772914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:21:22.368 [2024-11-20 11:33:45.772941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:120464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.368 [2024-11-20 11:33:45.772960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:21:22.368 [2024-11-20 11:33:45.772987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:120472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.368 
[2024-11-20 11:33:45.773006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:21:22.368 [2024-11-20 11:33:45.773033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:120480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.368 [2024-11-20 11:33:45.773052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:21:22.368 [2024-11-20 11:33:45.773079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:120488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.368 [2024-11-20 11:33:45.773099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:21:22.368 [2024-11-20 11:33:45.773125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:120496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.368 [2024-11-20 11:33:45.773157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:21:22.368 [2024-11-20 11:33:45.773185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:120504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.368 [2024-11-20 11:33:45.773205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:21:22.368 [2024-11-20 11:33:45.773232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:120512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.368 [2024-11-20 11:33:45.773250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:21:22.368 [2024-11-20 11:33:45.773288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:120520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.368 [2024-11-20 11:33:45.773307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:21:22.368 [2024-11-20 11:33:45.773334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:120528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.368 [2024-11-20 11:33:45.773353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:22.368 [2024-11-20 11:33:45.773380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:120536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.368 [2024-11-20 11:33:45.773399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:22.368 [2024-11-20 11:33:45.773426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:119752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.368 [2024-11-20 11:33:45.773445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:21:22.368 [2024-11-20 11:33:45.773472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 
lba:119760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.368 [2024-11-20 11:33:45.773491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:21:22.368 [2024-11-20 11:33:45.773518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:119768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.368 [2024-11-20 11:33:45.773536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:21:22.368 [2024-11-20 11:33:45.773563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:119776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.368 [2024-11-20 11:33:45.773582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:21:22.368 [2024-11-20 11:33:45.773609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:119784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.368 [2024-11-20 11:33:45.773668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:21:22.368 [2024-11-20 11:33:45.773700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:119792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.368 [2024-11-20 11:33:45.773720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:21:22.368 [2024-11-20 11:33:45.773747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:119800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.368 [2024-11-20 11:33:45.773777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:21:22.368 [2024-11-20 11:33:45.773805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:119808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.368 [2024-11-20 11:33:45.773824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:21:22.368 [2024-11-20 11:33:45.773860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:119816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.369 [2024-11-20 11:33:45.773879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:21:22.369 [2024-11-20 11:33:45.773910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:119824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.369 [2024-11-20 11:33:45.773932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:21:22.369 [2024-11-20 11:33:45.773959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:119832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.369 [2024-11-20 11:33:45.773979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:21:22.369 [2024-11-20 11:33:45.774006] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:119840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.369 [2024-11-20 11:33:45.774025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:21:22.369 [2024-11-20 11:33:45.774052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:119848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.369 [2024-11-20 11:33:45.774071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:21:22.369 [2024-11-20 11:33:45.774097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:119856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.369 [2024-11-20 11:33:45.774116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:21:22.369 [2024-11-20 11:33:45.774143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:119864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.369 [2024-11-20 11:33:45.774162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:21:22.369 [2024-11-20 11:33:45.774189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:119872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.369 [2024-11-20 11:33:45.774208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:21:22.369 [2024-11-20 11:33:45.774235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:119880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.369 [2024-11-20 11:33:45.774254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:21:22.369 [2024-11-20 11:33:45.774280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:119888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.369 [2024-11-20 11:33:45.774299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:21:22.369 [2024-11-20 11:33:45.774326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:119896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.369 [2024-11-20 11:33:45.774345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:21:22.369 [2024-11-20 11:33:45.774382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:119904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.369 [2024-11-20 11:33:45.774402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:21:22.369 [2024-11-20 11:33:45.774429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:119912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.369 [2024-11-20 11:33:45.774448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0077 
p:0 m:0 dnr:0 00:21:22.369 [2024-11-20 11:33:45.774475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:119920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.369 [2024-11-20 11:33:45.774494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:21:22.369 [2024-11-20 11:33:45.774520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:119928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.369 [2024-11-20 11:33:45.774539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:21:22.369 [2024-11-20 11:33:45.774565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:119936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.369 [2024-11-20 11:33:45.774584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:21:22.369 [2024-11-20 11:33:45.774611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:119944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.369 [2024-11-20 11:33:45.774647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:21:22.369 [2024-11-20 11:33:45.774675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:119952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.369 [2024-11-20 11:33:45.774695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:21:22.369 [2024-11-20 11:33:45.774722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:119960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.369 [2024-11-20 11:33:45.774741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:21:22.369 [2024-11-20 11:33:45.774768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:119968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.369 [2024-11-20 11:33:45.774787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:21:22.369 [2024-11-20 11:33:45.774813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:120544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.369 [2024-11-20 11:33:45.774833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:21:22.369 [2024-11-20 11:33:45.774860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:120552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.369 [2024-11-20 11:33:45.774882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:22.369 [2024-11-20 11:33:45.774914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:120560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.369 [2024-11-20 11:33:45.774937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:22.369 [2024-11-20 11:33:45.774974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:120568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.369 [2024-11-20 11:33:45.775006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:22.369 [2024-11-20 11:33:45.775033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:120576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.369 [2024-11-20 11:33:45.775052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:21:22.369 [2024-11-20 11:33:45.775082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:120584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.369 [2024-11-20 11:33:45.775105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:21:22.369 [2024-11-20 11:33:45.775133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:119568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.369 [2024-11-20 11:33:45.775152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:21:22.369 [2024-11-20 11:33:45.775179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:119576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.369 [2024-11-20 11:33:45.775198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:21:22.369 [2024-11-20 11:33:45.775224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:119584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.369 [2024-11-20 11:33:45.775244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:21:22.369 [2024-11-20 11:33:45.775271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:119592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.369 [2024-11-20 11:33:45.775290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:21:22.369 [2024-11-20 11:33:45.775317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:119600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.369 [2024-11-20 11:33:45.775336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:21:22.369 [2024-11-20 11:33:45.776307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:119976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.369 [2024-11-20 11:33:45.776341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:21:22.369 [2024-11-20 11:33:45.776374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:119984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.370 [2024-11-20 
11:33:45.776396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:21:22.370 [2024-11-20 11:33:45.776424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:119992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.370 [2024-11-20 11:33:45.776444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:21:22.370 [2024-11-20 11:33:45.776471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:120000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.370 [2024-11-20 11:33:45.776490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:21:22.370 [2024-11-20 11:33:45.776531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:120008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.370 [2024-11-20 11:33:45.776553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:21:22.370 [2024-11-20 11:33:45.776580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:120016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.370 [2024-11-20 11:33:45.776600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:21:22.370 [2024-11-20 11:33:45.776648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:120024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.370 [2024-11-20 11:33:45.776680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:21:22.370 [2024-11-20 11:33:45.776707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:120032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.370 [2024-11-20 11:33:45.776726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:21:22.370 [2024-11-20 11:33:45.776753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:120040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.370 [2024-11-20 11:33:45.776772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:21:22.370 [2024-11-20 11:33:45.776799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:120048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.370 [2024-11-20 11:33:45.776818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:21:22.370 [2024-11-20 11:33:45.776845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:120056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.370 [2024-11-20 11:33:45.776864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:21:22.370 [2024-11-20 11:33:45.776891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:120064 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.370 [2024-11-20 11:33:45.776910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:21:22.370 [2024-11-20 11:33:45.776937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:120072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.370 [2024-11-20 11:33:45.776956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:21:22.370 [2024-11-20 11:33:45.776983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:120080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.370 [2024-11-20 11:33:45.777002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:21:22.370 [2024-11-20 11:33:45.777029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:120088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.370 [2024-11-20 11:33:45.777048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:21:22.370 [2024-11-20 11:33:45.777074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:120096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.370 [2024-11-20 11:33:45.777094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:21:22.370 [2024-11-20 11:33:45.777127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:120104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.370 [2024-11-20 11:33:45.777161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:21:22.370 [2024-11-20 11:33:45.777189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:120112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.370 [2024-11-20 11:33:45.777209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:21:22.370 [2024-11-20 11:33:45.777247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:120120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.370 [2024-11-20 11:33:45.777265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:21:22.370 [2024-11-20 11:33:45.777304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:120128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.370 [2024-11-20 11:33:45.777333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:21:22.370 [2024-11-20 11:33:45.777362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:120136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.370 [2024-11-20 11:33:45.777382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:21:22.370 [2024-11-20 11:33:45.777409] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:120144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.370 [2024-11-20 11:33:45.777428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:21:22.370 [2024-11-20 11:33:45.777454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:120152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.370 [2024-11-20 11:33:45.777474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:21:22.370 [2024-11-20 11:33:45.777500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:120160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.370 [2024-11-20 11:33:45.777523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:22.370 [2024-11-20 11:33:45.777550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:120168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.370 [2024-11-20 11:33:45.777570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:22.370 [2024-11-20 11:33:45.777596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:120176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.370 [2024-11-20 11:33:45.777648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:21:22.370 [2024-11-20 11:33:45.777682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:120184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.370 [2024-11-20 11:33:45.777702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:21:22.370 [2024-11-20 11:33:45.777730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:120192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.370 [2024-11-20 11:33:45.777749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:21:22.370 [2024-11-20 11:33:45.777782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:120200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.370 [2024-11-20 11:33:45.777815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:21:22.370 [2024-11-20 11:33:45.777843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:120208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.370 [2024-11-20 11:33:45.777862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:21:22.370 [2024-11-20 11:33:45.777889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:120216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.370 [2024-11-20 11:33:45.777908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0028 p:0 m:0 
dnr:0 00:21:22.370 [2024-11-20 11:33:45.777935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:120224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.370 [2024-11-20 11:33:45.777954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:21:22.370 [2024-11-20 11:33:45.777981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:120232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.370 [2024-11-20 11:33:45.778000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:21:22.370 [2024-11-20 11:33:45.778027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:120240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.370 [2024-11-20 11:33:45.778046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:21:22.371 [2024-11-20 11:33:45.778073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:120248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.371 [2024-11-20 11:33:45.778092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:21:22.371 [2024-11-20 11:33:45.778119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:120256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.371 [2024-11-20 11:33:45.778138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:21:22.371 [2024-11-20 11:33:45.778164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:120264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.371 [2024-11-20 11:33:45.778192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:21:22.371 [2024-11-20 11:33:45.778221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:120272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.371 [2024-11-20 11:33:45.778240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:21:22.371 [2024-11-20 11:33:45.778267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:120280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.371 [2024-11-20 11:33:45.778286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:21:22.371 [2024-11-20 11:33:45.778313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:120288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.371 [2024-11-20 11:33:45.778335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:21:22.371 [2024-11-20 11:33:45.778362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:120296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.371 [2024-11-20 11:33:45.778391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:21:22.371 [2024-11-20 11:33:45.778446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:120304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.371 [2024-11-20 11:33:45.778470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:21:22.371 [2024-11-20 11:33:45.778498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:120312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.371 [2024-11-20 11:33:45.778518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:21:22.371 [2024-11-20 11:33:45.778544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:120320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.371 [2024-11-20 11:33:45.778564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:21:22.371 [2024-11-20 11:33:45.778590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:120328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.371 [2024-11-20 11:33:45.778610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:21:22.371 [2024-11-20 11:33:45.778654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:120336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.371 [2024-11-20 11:33:45.778674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:21:22.371 [2024-11-20 11:33:45.778701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:120344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.371 [2024-11-20 11:33:45.778720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:21:22.371 [2024-11-20 11:33:45.778747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:120352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.371 [2024-11-20 11:33:45.778766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:21:22.371 [2024-11-20 11:33:45.778793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:120360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.371 [2024-11-20 11:33:45.778812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:21:22.371 [2024-11-20 11:33:45.778839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:120368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.371 [2024-11-20 11:33:45.778858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:21:22.371 [2024-11-20 11:33:45.778884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:120376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.371 [2024-11-20 11:33:45.778904] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:21:22.371 [2024-11-20 11:33:45.778931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:120384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.371 [2024-11-20 11:33:45.778950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:21:22.371 [2024-11-20 11:33:45.778978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:120392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.371 [2024-11-20 11:33:45.778997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:21:22.371 [2024-11-20 11:33:45.779876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:120400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.371 [2024-11-20 11:33:45.779911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:21:22.371 [2024-11-20 11:33:45.779944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:120408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.371 [2024-11-20 11:33:45.779966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:21:22.371 [2024-11-20 11:33:45.779994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:120416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.371 [2024-11-20 11:33:45.780015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:22.371 [2024-11-20 11:33:45.780042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:120424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.371 [2024-11-20 11:33:45.780061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:22.371 [2024-11-20 11:33:45.780087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:120432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.371 [2024-11-20 11:33:45.780107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:21:22.371 [2024-11-20 11:33:45.780133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:120440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.371 [2024-11-20 11:33:45.780152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:21:22.371 [2024-11-20 11:33:45.780179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:119608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.371 [2024-11-20 11:33:45.780198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:21:22.371 [2024-11-20 11:33:45.780225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:119616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:21:22.371 [2024-11-20 11:33:45.780244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:21:22.371 [2024-11-20 11:33:45.780270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:119624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.371 [2024-11-20 11:33:45.780289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:21:22.371 [2024-11-20 11:33:45.780318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:119632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.372 [2024-11-20 11:33:45.780344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:21:22.372 [2024-11-20 11:33:45.780372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:119640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.372 [2024-11-20 11:33:45.780391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:21:22.372 [2024-11-20 11:33:45.780418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:119648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.372 [2024-11-20 11:33:45.780437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:21:22.372 [2024-11-20 11:33:45.780478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:119656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.372 [2024-11-20 11:33:45.780499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:21:22.372 [2024-11-20 11:33:45.780526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:119664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.372 [2024-11-20 11:33:45.780545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:21:22.372 [2024-11-20 11:33:45.780581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:119672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.372 [2024-11-20 11:33:45.780628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:21:22.372 [2024-11-20 11:33:45.780662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:119680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.372 [2024-11-20 11:33:45.780681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:21:22.372 [2024-11-20 11:33:45.780708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:119688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.372 [2024-11-20 11:33:45.780727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:21:22.372 [2024-11-20 11:33:45.780753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:33 nsid:1 lba:119696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.372 [2024-11-20 11:33:45.780773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:21:22.372 [2024-11-20 11:33:45.780799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:119704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.372 [2024-11-20 11:33:45.780818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:21:22.372 [2024-11-20 11:33:45.780845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:119712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.372 [2024-11-20 11:33:45.780864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:21:22.372 [2024-11-20 11:33:45.780891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:119720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.372 [2024-11-20 11:33:45.780910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:21:22.372 [2024-11-20 11:33:45.780936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:119728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.372 [2024-11-20 11:33:45.780955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:21:22.372 [2024-11-20 11:33:45.780982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:119736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.372 [2024-11-20 11:33:45.781001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:21:22.372 [2024-11-20 11:33:45.781028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:119744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.372 [2024-11-20 11:33:45.781046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:21:22.372 [2024-11-20 11:33:45.781073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:120448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.372 [2024-11-20 11:33:45.781103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:21:22.372 [2024-11-20 11:33:45.781131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:120456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.372 [2024-11-20 11:33:45.781151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:21:22.372 [2024-11-20 11:33:45.781177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:120464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.372 [2024-11-20 11:33:45.781196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:21:22.372 [2024-11-20 
11:33:45.781233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:120472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.372 [2024-11-20 11:33:45.781252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:21:22.372 [2024-11-20 11:33:45.781278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:120480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.372 [2024-11-20 11:33:45.781297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:21:22.372 [2024-11-20 11:33:45.781324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:120488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.372 [2024-11-20 11:33:45.781343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:21:22.372 [2024-11-20 11:33:45.781375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:120496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.372 [2024-11-20 11:33:45.781405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:21:22.372 [2024-11-20 11:33:45.781434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:120504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.372 [2024-11-20 11:33:45.781453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:21:22.372 [2024-11-20 11:33:45.781480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:120512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.372 [2024-11-20 11:33:45.781499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:21:22.372 [2024-11-20 11:33:45.781526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:120520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.372 [2024-11-20 11:33:45.781545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:21:22.372 [2024-11-20 11:33:45.781572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:120528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.372 [2024-11-20 11:33:45.781591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:22.372 [2024-11-20 11:33:45.781645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:120536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.372 [2024-11-20 11:33:45.781671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:22.373 [2024-11-20 11:33:45.781699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:119752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.373 [2024-11-20 11:33:45.781730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:74 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:21:22.373 [2024-11-20 11:33:45.781768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:119760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.373 [2024-11-20 11:33:45.781799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:21:22.373 [2024-11-20 11:33:45.781833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:119768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.373 [2024-11-20 11:33:45.781853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:21:22.373 [2024-11-20 11:33:45.781880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:119776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.373 [2024-11-20 11:33:45.781899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:21:22.373 [2024-11-20 11:33:45.781925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:119784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.373 [2024-11-20 11:33:45.781944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:21:22.373 [2024-11-20 11:33:45.781971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:119792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.373 [2024-11-20 11:33:45.781990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:21:22.373 [2024-11-20 11:33:45.782017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:119800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.373 [2024-11-20 11:33:45.782036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:21:22.373 [2024-11-20 11:33:45.782062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:119808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.373 [2024-11-20 11:33:45.782081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:21:22.373 [2024-11-20 11:33:45.782108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:119816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.373 [2024-11-20 11:33:45.782127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:21:22.373 [2024-11-20 11:33:45.782153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:119824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.373 [2024-11-20 11:33:45.782172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:21:22.373 [2024-11-20 11:33:45.782202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:119832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.373 [2024-11-20 11:33:45.782222] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:21:22.373 [2024-11-20 11:33:45.782248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:119840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.373 [2024-11-20 11:33:45.782267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:21:22.373 [2024-11-20 11:33:45.782294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:119848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.373 [2024-11-20 11:33:45.782313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:21:22.373 [2024-11-20 11:33:45.782353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:119856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.373 [2024-11-20 11:33:45.782374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:21:22.373 [2024-11-20 11:33:45.782400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:119864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.373 [2024-11-20 11:33:45.782420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:21:22.373 [2024-11-20 11:33:45.782447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:119872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.373 [2024-11-20 11:33:45.782466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:21:22.373 [2024-11-20 11:33:45.782494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:119880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.373 [2024-11-20 11:33:45.782513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:21:22.373 [2024-11-20 11:33:45.782539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:119888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.373 [2024-11-20 11:33:45.782564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:21:22.373 [2024-11-20 11:33:45.782595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:119896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.373 [2024-11-20 11:33:45.782628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:21:22.373 [2024-11-20 11:33:45.782659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:119904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.373 [2024-11-20 11:33:45.782679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:21:22.373 [2024-11-20 11:33:45.782706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:119912 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:21:22.373 [2024-11-20 11:33:45.782726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:21:22.373 [2024-11-20 11:33:45.782752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:119920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.373 [2024-11-20 11:33:45.782772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:21:22.373 [2024-11-20 11:33:45.782810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:119928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.373 [2024-11-20 11:33:45.782829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:21:22.373 [2024-11-20 11:33:45.782856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:119936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.373 [2024-11-20 11:33:45.782875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:21:22.373 [2024-11-20 11:33:45.782902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:119944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.373 [2024-11-20 11:33:45.782921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:21:22.373 [2024-11-20 11:33:45.782973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:119952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.373 [2024-11-20 11:33:45.782994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:21:22.373 [2024-11-20 11:33:45.783025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:119960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.373 [2024-11-20 11:33:45.783045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:21:22.373 [2024-11-20 11:33:45.791454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:119968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.373 [2024-11-20 11:33:45.791506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:21:22.373 [2024-11-20 11:33:45.791541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:120544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.373 [2024-11-20 11:33:45.791568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:21:22.373 [2024-11-20 11:33:45.791600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:120552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.373 [2024-11-20 11:33:45.791651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:22.373 [2024-11-20 11:33:45.791684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:116 nsid:1 lba:120560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.373 [2024-11-20 11:33:45.791704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:22.373 [2024-11-20 11:33:45.791731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:120568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.373 [2024-11-20 11:33:45.791750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:22.373 [2024-11-20 11:33:45.791777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:120576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.373 [2024-11-20 11:33:45.791797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:21:22.374 [2024-11-20 11:33:45.791824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:120584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.374 [2024-11-20 11:33:45.791843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:21:22.374 [2024-11-20 11:33:45.791875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:119568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.374 [2024-11-20 11:33:45.791895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:21:22.374 [2024-11-20 11:33:45.791922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:119576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.374 [2024-11-20 11:33:45.791941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:21:22.374 [2024-11-20 11:33:45.791968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:119584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.374 [2024-11-20 11:33:45.791986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:21:22.374 [2024-11-20 11:33:45.792033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:119592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.374 [2024-11-20 11:33:45.792062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:21:22.374 [2024-11-20 11:33:45.793154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:119600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.374 [2024-11-20 11:33:45.793192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:21:22.374 [2024-11-20 11:33:45.793229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:119976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.374 [2024-11-20 11:33:45.793251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:21:22.374 [2024-11-20 
11:33:45.793279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:119984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.374 [2024-11-20 11:33:45.793298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:21:22.374 [2024-11-20 11:33:45.793326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:119992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.374 [2024-11-20 11:33:45.793345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:21:22.374 [2024-11-20 11:33:45.793373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:120000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.374 [2024-11-20 11:33:45.793393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:21:22.374 [2024-11-20 11:33:45.793420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:120008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.374 [2024-11-20 11:33:45.793440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:21:22.374 [2024-11-20 11:33:45.793467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:120016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.374 [2024-11-20 11:33:45.793486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:21:22.374 [2024-11-20 11:33:45.793513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:120024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.374 [2024-11-20 11:33:45.793533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:21:22.374 [2024-11-20 11:33:45.793559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:120032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.374 [2024-11-20 11:33:45.793579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:21:22.374 [2024-11-20 11:33:45.793605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:120040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.374 [2024-11-20 11:33:45.793664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:21:22.374 [2024-11-20 11:33:45.793695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:120048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.374 [2024-11-20 11:33:45.793715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:21:22.374 [2024-11-20 11:33:45.793743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:120056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.374 [2024-11-20 11:33:45.793779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 
cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:21:22.374 [2024-11-20 11:33:45.793807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:120064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.374 [2024-11-20 11:33:45.793827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:21:22.374 [2024-11-20 11:33:45.793854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:120072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.374 [2024-11-20 11:33:45.793873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:21:22.374 [2024-11-20 11:33:45.793900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:120080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.374 [2024-11-20 11:33:45.793919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:21:22.374 [2024-11-20 11:33:45.793945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:120088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.374 [2024-11-20 11:33:45.793964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:21:22.374 [2024-11-20 11:33:45.793991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:120096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.374 [2024-11-20 11:33:45.794011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:21:22.374 [2024-11-20 11:33:45.794038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:120104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.374 [2024-11-20 11:33:45.794057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:21:22.374 [2024-11-20 11:33:45.794083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:120112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.374 [2024-11-20 11:33:45.794102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:21:22.374 [2024-11-20 11:33:45.794139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:120120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.374 [2024-11-20 11:33:45.794158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:21:22.374 [2024-11-20 11:33:45.794185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:120128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.374 [2024-11-20 11:33:45.794205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:21:22.374 [2024-11-20 11:33:45.794231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:120136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.374 [2024-11-20 11:33:45.794250] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:21:22.374 [2024-11-20 11:33:45.794279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:120144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.374 [2024-11-20 11:33:45.794306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:21:22.374 [2024-11-20 11:33:45.794334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:120152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.374 [2024-11-20 11:33:45.794364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:21:22.374 [2024-11-20 11:33:45.794400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:120160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.374 [2024-11-20 11:33:45.794421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:22.374 [2024-11-20 11:33:45.794448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:120168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.374 [2024-11-20 11:33:45.794467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:22.375 [2024-11-20 11:33:45.794504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:120176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.375 [2024-11-20 11:33:45.794527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:21:22.375 [2024-11-20 11:33:45.794555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:120184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.375 [2024-11-20 11:33:45.794574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:21:22.375 [2024-11-20 11:33:45.794601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:120192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.375 [2024-11-20 11:33:45.794637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:21:22.375 [2024-11-20 11:33:45.794667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:120200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.375 [2024-11-20 11:33:45.794687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:21:22.375 [2024-11-20 11:33:45.794714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:120208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.375 [2024-11-20 11:33:45.794733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:21:22.375 [2024-11-20 11:33:45.794760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:120216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.375 [2024-11-20 
11:33:45.794779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:21:22.375 [2024-11-20 11:33:45.794806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:120224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.375 [2024-11-20 11:33:45.794826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:21:22.375 [2024-11-20 11:33:45.794853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:120232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.375 [2024-11-20 11:33:45.794872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:21:22.375 [2024-11-20 11:33:45.794899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:120240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.375 [2024-11-20 11:33:45.794918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:21:22.375 [2024-11-20 11:33:45.794945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:120248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.375 [2024-11-20 11:33:45.794965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:21:22.375 [2024-11-20 11:33:45.795003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:120256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.375 [2024-11-20 11:33:45.795024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:21:22.375 [2024-11-20 11:33:45.795050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:120264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.375 [2024-11-20 11:33:45.795070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:21:22.375 [2024-11-20 11:33:45.795097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:120272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.375 [2024-11-20 11:33:45.795116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:21:22.375 [2024-11-20 11:33:45.795143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:120280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.375 [2024-11-20 11:33:45.795162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:21:22.375 [2024-11-20 11:33:45.795189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:120288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.375 [2024-11-20 11:33:45.795208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:21:22.375 [2024-11-20 11:33:45.795234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:120296 len:8 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.375 [2024-11-20 11:33:45.795254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:21:22.375 [2024-11-20 11:33:45.795290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:120304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.375 [2024-11-20 11:33:45.795309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:21:22.375 [2024-11-20 11:33:45.795336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:120312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.375 [2024-11-20 11:33:45.795356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:21:22.375 [2024-11-20 11:33:45.795382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:120320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.375 [2024-11-20 11:33:45.795402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:21:22.375 [2024-11-20 11:33:45.795428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:120328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.375 [2024-11-20 11:33:45.795448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:21:22.375 [2024-11-20 11:33:45.795474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:120336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.375 [2024-11-20 11:33:45.795494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:21:22.375 [2024-11-20 11:33:45.795520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:120344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.375 [2024-11-20 11:33:45.795539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:21:22.375 [2024-11-20 11:33:45.795575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:120352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.375 [2024-11-20 11:33:45.795595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:21:22.375 [2024-11-20 11:33:45.795639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:120360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.375 [2024-11-20 11:33:45.795662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:21:22.375 [2024-11-20 11:33:45.795689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:120368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.375 [2024-11-20 11:33:45.795709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:21:22.375 [2024-11-20 11:33:45.795736] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:120376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.375 [2024-11-20 11:33:45.795756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:21:22.375 [2024-11-20 11:33:45.795784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:120384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.375 [2024-11-20 11:33:45.795804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:21:22.375 [2024-11-20 11:33:45.796654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:120392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.375 [2024-11-20 11:33:45.796690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:21:22.375 [2024-11-20 11:33:45.796724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:120400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.375 [2024-11-20 11:33:45.796745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:21:22.375 [2024-11-20 11:33:45.796773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:120408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.375 [2024-11-20 11:33:45.796793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:21:22.375 [2024-11-20 11:33:45.796820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:120416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.375 [2024-11-20 11:33:45.796840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:22.375 [2024-11-20 11:33:45.796867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:120424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.375 [2024-11-20 11:33:45.796887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:22.375 [2024-11-20 11:33:45.796914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:120432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.375 [2024-11-20 11:33:45.796933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:21:22.376 [2024-11-20 11:33:45.796960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:120440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.376 [2024-11-20 11:33:45.796979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:21:22.376 [2024-11-20 11:33:45.797006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:119608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.376 [2024-11-20 11:33:45.797041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 
00:21:22.376 [2024-11-20 11:33:45.797070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:119616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.376 [2024-11-20 11:33:45.797089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:21:22.376 [2024-11-20 11:33:45.797116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:119624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.376 [2024-11-20 11:33:45.797135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:21:22.376 [2024-11-20 11:33:45.797162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:119632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.376 [2024-11-20 11:33:45.797181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:21:22.376 [2024-11-20 11:33:45.797208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:119640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.376 [2024-11-20 11:33:45.797227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:21:22.376 [2024-11-20 11:33:45.797253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:119648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.376 [2024-11-20 11:33:45.797272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:21:22.376 [2024-11-20 11:33:45.797299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:119656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.376 [2024-11-20 11:33:45.797318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:21:22.376 [2024-11-20 11:33:45.797345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:119664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.376 [2024-11-20 11:33:45.797364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:21:22.376 [2024-11-20 11:33:45.797391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:119672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.376 [2024-11-20 11:33:45.797411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:21:22.376 [2024-11-20 11:33:45.797438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:119680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.376 [2024-11-20 11:33:45.797464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:21:22.376 [2024-11-20 11:33:45.797504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:119688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.376 [2024-11-20 11:33:45.797525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:21:22.376 [2024-11-20 11:33:45.797552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:119696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.376 [2024-11-20 11:33:45.797579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:21:22.376 [2024-11-20 11:33:45.797607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:119704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.376 [2024-11-20 11:33:45.797678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:21:22.376 [2024-11-20 11:33:45.797709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:119712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.376 [2024-11-20 11:33:45.797729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:21:22.376 [2024-11-20 11:33:45.797756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:119720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.376 [2024-11-20 11:33:45.797782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:21:22.376 [2024-11-20 11:33:45.797820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:119728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.376 [2024-11-20 11:33:45.797840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:21:22.376 [2024-11-20 11:33:45.797867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:119736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.376 [2024-11-20 11:33:45.797886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:21:22.376 [2024-11-20 11:33:45.797913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:119744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.376 [2024-11-20 11:33:45.797932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:21:22.376 [2024-11-20 11:33:45.797958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:120448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.376 [2024-11-20 11:33:45.797977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:21:22.376 [2024-11-20 11:33:45.798004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:120456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.376 [2024-11-20 11:33:45.798023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:21:22.376 [2024-11-20 11:33:45.798050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:120464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.376 [2024-11-20 11:33:45.798069] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:21:22.376 [2024-11-20 11:33:45.798096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:120472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.376 [2024-11-20 11:33:45.798115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:21:22.376 [2024-11-20 11:33:45.798142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:120480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.376 [2024-11-20 11:33:45.798161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:21:22.376 [2024-11-20 11:33:45.798188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:120488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.376 [2024-11-20 11:33:45.798207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:21:22.376 [2024-11-20 11:33:45.798234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:120496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.376 [2024-11-20 11:33:45.798263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:21:22.376 [2024-11-20 11:33:45.798292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:120504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.376 [2024-11-20 11:33:45.798311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:21:22.376 [2024-11-20 11:33:45.798338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:120512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.376 [2024-11-20 11:33:45.798358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:21:22.376 [2024-11-20 11:33:45.798385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:120520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.376 [2024-11-20 11:33:45.798404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:21:22.376 [2024-11-20 11:33:45.798431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:120528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.376 [2024-11-20 11:33:45.798450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:22.376 [2024-11-20 11:33:45.798477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:120536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.376 [2024-11-20 11:33:45.798496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:22.376 [2024-11-20 11:33:45.798523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:119752 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:21:22.376 [2024-11-20 11:33:45.798542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:21:22.376 [2024-11-20 11:33:45.798569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:119760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.376 [2024-11-20 11:33:45.798588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:21:22.376 [2024-11-20 11:33:45.798628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:119768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.376 [2024-11-20 11:33:45.798651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:21:22.376 [2024-11-20 11:33:45.798679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:119776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.376 [2024-11-20 11:33:45.798699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:21:22.376 [2024-11-20 11:33:45.798725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:119784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.377 [2024-11-20 11:33:45.798744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:21:22.377 [2024-11-20 11:33:45.798771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:119792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.377 [2024-11-20 11:33:45.798790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:21:22.377 [2024-11-20 11:33:45.798817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:119800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.377 [2024-11-20 11:33:45.798836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:21:22.377 [2024-11-20 11:33:45.798873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:119808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.377 [2024-11-20 11:33:45.798893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:21:22.377 [2024-11-20 11:33:45.798919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:119816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.377 [2024-11-20 11:33:45.798938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:21:22.377 [2024-11-20 11:33:45.798965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:119824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.377 [2024-11-20 11:33:45.798985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:21:22.377 [2024-11-20 11:33:45.799015] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:88 nsid:1 lba:119832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.377 [2024-11-20 11:33:45.799042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:21:22.377 [2024-11-20 11:33:45.799070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:119840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.377 [2024-11-20 11:33:45.799096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:21:22.377 [2024-11-20 11:33:45.799124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:119848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.377 [2024-11-20 11:33:45.799144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:21:22.377 [2024-11-20 11:33:45.799171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:119856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.377 [2024-11-20 11:33:45.799190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:21:22.377 [2024-11-20 11:33:45.799216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:119864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.377 [2024-11-20 11:33:45.799236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:21:22.377 [2024-11-20 11:33:45.799262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:119872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.377 [2024-11-20 11:33:45.799281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:21:22.377 [2024-11-20 11:33:45.799311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:119880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.377 [2024-11-20 11:33:45.799338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:21:22.377 [2024-11-20 11:33:45.799365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:119888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.377 [2024-11-20 11:33:45.799385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:21:22.377 [2024-11-20 11:33:45.799412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:119896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.377 [2024-11-20 11:33:45.799431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:21:22.377 [2024-11-20 11:33:45.799477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:119904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.377 [2024-11-20 11:33:45.799497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:21:22.377 
[2024-11-20 11:33:45.799524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:119912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.377 [2024-11-20 11:33:45.799543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:21:22.377 [2024-11-20 11:33:45.799570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:119920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.377 [2024-11-20 11:33:45.799589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:21:22.377 [2024-11-20 11:33:45.799630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:119928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.377 [2024-11-20 11:33:45.799653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:21:22.377 [2024-11-20 11:33:45.799680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:119936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.377 [2024-11-20 11:33:45.799700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:21:22.377 [2024-11-20 11:33:45.799727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:119944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.377 [2024-11-20 11:33:45.799746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:21:22.377 [2024-11-20 11:33:45.799772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:119952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.377 [2024-11-20 11:33:45.799791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:21:22.377 [2024-11-20 11:33:45.799818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:119960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.377 [2024-11-20 11:33:45.799837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:21:22.377 [2024-11-20 11:33:45.799864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:119968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.377 [2024-11-20 11:33:45.799883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:21:22.377 [2024-11-20 11:33:45.799909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:120544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.377 [2024-11-20 11:33:45.799928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:21:22.377 [2024-11-20 11:33:45.799955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:120552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.377 [2024-11-20 11:33:45.799974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:22.377 [2024-11-20 11:33:45.800001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:120560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.377 [2024-11-20 11:33:45.800020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:22.377 [2024-11-20 11:33:45.800056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:120568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.378 [2024-11-20 11:33:45.800077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:22.378 [2024-11-20 11:33:45.800104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:120576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.378 [2024-11-20 11:33:45.800123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:21:22.378 [2024-11-20 11:33:45.800150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:120584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.378 [2024-11-20 11:33:45.800170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:21:22.378 [2024-11-20 11:33:45.800197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:119568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.378 [2024-11-20 11:33:45.800216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:21:22.378 [2024-11-20 11:33:45.800243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:119576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.378 [2024-11-20 11:33:45.800262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:21:22.378 [2024-11-20 11:33:45.800290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:119584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.378 [2024-11-20 11:33:45.800310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:21:22.378 [2024-11-20 11:33:45.801301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:119592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.378 [2024-11-20 11:33:45.801337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:21:22.378 [2024-11-20 11:33:45.801371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:119600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.378 [2024-11-20 11:33:45.801394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:21:22.378 [2024-11-20 11:33:45.801422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:119976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.378 [2024-11-20 11:33:45.801442] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:21:22.378 [2024-11-20 11:33:45.801469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:119984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.378 [2024-11-20 11:33:45.801488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:21:22.378 [2024-11-20 11:33:45.801515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:119992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.378 [2024-11-20 11:33:45.801535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:21:22.378 [2024-11-20 11:33:45.801562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:120000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.378 [2024-11-20 11:33:45.801582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:21:22.378 [2024-11-20 11:33:45.801609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:120008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.378 [2024-11-20 11:33:45.801679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:21:22.378 [2024-11-20 11:33:45.801710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:120016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.378 [2024-11-20 11:33:45.801730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:21:22.378 [2024-11-20 11:33:45.801767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:120024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.378 [2024-11-20 11:33:45.801787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:21:22.378 [2024-11-20 11:33:45.801814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:120032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.378 [2024-11-20 11:33:45.801833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:21:22.378 [2024-11-20 11:33:45.801860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:120040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.378 [2024-11-20 11:33:45.801879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:21:22.378 [2024-11-20 11:33:45.801906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:120048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.378 [2024-11-20 11:33:45.801925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:21:22.378 [2024-11-20 11:33:45.801952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:120056 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:21:22.378 [2024-11-20 11:33:45.801971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:21:22.378 [2024-11-20 11:33:45.801998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:120064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.378 [2024-11-20 11:33:45.802017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:21:22.378 [2024-11-20 11:33:45.802044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:120072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.378 [2024-11-20 11:33:45.802063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:21:22.378 [2024-11-20 11:33:45.802089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:120080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.378 [2024-11-20 11:33:45.802109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:21:22.378 [2024-11-20 11:33:45.802136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:120088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.378 [2024-11-20 11:33:45.802155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:21:22.378 [2024-11-20 11:33:45.802182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:120096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.378 [2024-11-20 11:33:45.802210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:21:22.378 [2024-11-20 11:33:45.802245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:120104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.378 [2024-11-20 11:33:45.802276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:21:22.378 [2024-11-20 11:33:45.802315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:120112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.378 [2024-11-20 11:33:45.802335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:21:22.378 [2024-11-20 11:33:45.802370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:120120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.378 [2024-11-20 11:33:45.802390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:21:22.378 [2024-11-20 11:33:45.802417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:120128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.378 [2024-11-20 11:33:45.802437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:21:22.378 [2024-11-20 11:33:45.802463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:49 nsid:1 lba:120136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.378 [2024-11-20 11:33:45.802483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:21:22.378 [2024-11-20 11:33:45.802510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:120144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.378 [2024-11-20 11:33:45.802532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:21:22.378 [2024-11-20 11:33:45.802563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:120152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.378 [2024-11-20 11:33:45.802582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:21:22.378 [2024-11-20 11:33:45.802610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:120160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.378 [2024-11-20 11:33:45.802663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:22.378 [2024-11-20 11:33:45.802696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:120168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.379 [2024-11-20 11:33:45.802716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:22.379 [2024-11-20 11:33:45.802744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:120176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.379 [2024-11-20 11:33:45.802763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:21:22.379 [2024-11-20 11:33:45.802790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:120184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.379 [2024-11-20 11:33:45.802809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:21:22.379 [2024-11-20 11:33:45.802836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:120192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.379 [2024-11-20 11:33:45.802855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:21:22.379 [2024-11-20 11:33:45.802882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:120200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.379 [2024-11-20 11:33:45.802901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:21:22.379 [2024-11-20 11:33:45.802943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:120208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.379 [2024-11-20 11:33:45.802963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:21:22.379 [2024-11-20 11:33:45.802990] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:120216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.379 [2024-11-20 11:33:45.803009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:21:22.379 [2024-11-20 11:33:45.803036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:120224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.379 [2024-11-20 11:33:45.803056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:21:22.379 [2024-11-20 11:33:45.803083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:120232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.379 [2024-11-20 11:33:45.803103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:21:22.379 [2024-11-20 11:33:45.803129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:120240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.379 [2024-11-20 11:33:45.803149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:21:22.379 [2024-11-20 11:33:45.803176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:120248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.379 [2024-11-20 11:33:45.803195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:21:22.379 [2024-11-20 11:33:45.803221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:120256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.379 [2024-11-20 11:33:45.803240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:21:22.379 [2024-11-20 11:33:45.803267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:120264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.379 [2024-11-20 11:33:45.803287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:21:22.379 [2024-11-20 11:33:45.803313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:120272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.379 [2024-11-20 11:33:45.803332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:21:22.379 [2024-11-20 11:33:45.803359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:120280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.379 [2024-11-20 11:33:45.803378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:21:22.379 [2024-11-20 11:33:45.803405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:120288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.379 [2024-11-20 11:33:45.803424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 
sqhd:0031 p:0 m:0 dnr:0 00:21:22.379 [2024-11-20 11:33:45.803451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:120296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.379 [2024-11-20 11:33:45.803470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:21:22.379 [2024-11-20 11:33:45.803521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:120304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.379 [2024-11-20 11:33:45.803542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:21:22.379 [2024-11-20 11:33:45.803569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:120312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.379 [2024-11-20 11:33:45.803588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:21:22.379 [2024-11-20 11:33:45.803632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:120320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.379 [2024-11-20 11:33:45.803655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:21:22.379 [2024-11-20 11:33:45.803684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:120328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.379 [2024-11-20 11:33:45.803703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:21:22.379 [2024-11-20 11:33:45.803730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:120336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.379 [2024-11-20 11:33:45.803749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:21:22.379 [2024-11-20 11:33:45.803775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:120344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.379 [2024-11-20 11:33:45.803796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:21:22.379 [2024-11-20 11:33:45.803839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:120352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.379 [2024-11-20 11:33:45.803870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:21:22.379 [2024-11-20 11:33:45.803902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:120360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.379 [2024-11-20 11:33:45.803923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:21:22.379 [2024-11-20 11:33:45.803956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:120368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.379 [2024-11-20 11:33:45.803986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:21:22.379 [2024-11-20 11:33:45.804021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:120376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.379 [2024-11-20 11:33:45.804051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:21:22.379 [2024-11-20 11:33:45.804885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:120384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.379 [2024-11-20 11:33:45.804920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:21:22.379 [2024-11-20 11:33:45.804954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:120392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.379 [2024-11-20 11:33:45.804976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:21:22.379 [2024-11-20 11:33:45.805003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:120400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.379 [2024-11-20 11:33:45.805049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:21:22.380 [2024-11-20 11:33:45.805079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:120408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.380 [2024-11-20 11:33:45.805100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:21:22.380 [2024-11-20 11:33:45.805127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:120416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.380 [2024-11-20 11:33:45.805146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:22.380 [2024-11-20 11:33:45.805178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:120424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.380 [2024-11-20 11:33:45.805196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:22.380 [2024-11-20 11:33:45.805223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:120432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.380 [2024-11-20 11:33:45.805242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:21:22.380 [2024-11-20 11:33:45.805270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:120440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.380 [2024-11-20 11:33:45.805289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:21:22.380 [2024-11-20 11:33:45.805316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:119608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.380 [2024-11-20 11:33:45.805335] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:21:22.380 [2024-11-20 11:33:45.805362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:119616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.380 [2024-11-20 11:33:45.805381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:21:22.380 [2024-11-20 11:33:45.805407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:119624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.380 [2024-11-20 11:33:45.805426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:21:22.380 [2024-11-20 11:33:45.805452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:119632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.380 [2024-11-20 11:33:45.805472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:21:22.380 [2024-11-20 11:33:45.805509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:119640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.380 [2024-11-20 11:33:45.805538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:21:22.380 [2024-11-20 11:33:45.805567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:119648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.380 [2024-11-20 11:33:45.805586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:21:22.380 [2024-11-20 11:33:45.805642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:119656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.380 [2024-11-20 11:33:45.805704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:21:22.380 [2024-11-20 11:33:45.805748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:119664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.380 [2024-11-20 11:33:45.805768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:21:22.380 [2024-11-20 11:33:45.805796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:119672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.380 [2024-11-20 11:33:45.805821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:21:22.380 [2024-11-20 11:33:45.805850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:119680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.380 [2024-11-20 11:33:45.805869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:21:22.380 [2024-11-20 11:33:45.805896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:119688 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.380 [2024-11-20 11:33:45.805915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:21:22.380 [2024-11-20 11:33:45.805942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:119696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.380 [2024-11-20 11:33:45.805961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:21:22.380 [2024-11-20 11:33:45.805988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:119704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.380 [2024-11-20 11:33:45.806007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:21:22.380 [2024-11-20 11:33:45.806034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:119712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.380 [2024-11-20 11:33:45.806053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:21:22.380 [2024-11-20 11:33:45.806079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:119720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.380 [2024-11-20 11:33:45.806098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:21:22.380 [2024-11-20 11:33:45.806125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:119728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.380 [2024-11-20 11:33:45.806145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:21:22.380 [2024-11-20 11:33:45.806171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:119736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.380 [2024-11-20 11:33:45.806190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:21:22.380 [2024-11-20 11:33:45.806217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:119744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.380 [2024-11-20 11:33:45.806235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:21:22.380 [2024-11-20 11:33:45.806262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:120448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.380 [2024-11-20 11:33:45.806297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:21:22.380 [2024-11-20 11:33:45.806327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:120456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.380 [2024-11-20 11:33:45.806347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:21:22.380 [2024-11-20 11:33:45.806374] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:120464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.380 [2024-11-20 11:33:45.806394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:21:22.380 [2024-11-20 11:33:45.806421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:120472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.380 [2024-11-20 11:33:45.806440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:21:22.380 [2024-11-20 11:33:45.806467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:120480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.380 [2024-11-20 11:33:45.806486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:21:22.380 [2024-11-20 11:33:45.806513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:120488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.380 [2024-11-20 11:33:45.806531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:21:22.380 [2024-11-20 11:33:45.806558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:120496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.380 [2024-11-20 11:33:45.806577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:21:22.380 [2024-11-20 11:33:45.806604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:120504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.380 [2024-11-20 11:33:45.806640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:21:22.380 [2024-11-20 11:33:45.806669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:120512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.380 [2024-11-20 11:33:45.806689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:21:22.380 [2024-11-20 11:33:45.806715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:120520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.380 [2024-11-20 11:33:45.806735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:21:22.380 [2024-11-20 11:33:45.806762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:120528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.380 [2024-11-20 11:33:45.806781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:22.380 [2024-11-20 11:33:45.806817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:120536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.380 [2024-11-20 11:33:45.806846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0062 p:0 m:0 
dnr:0 00:21:22.381 [2024-11-20 11:33:45.806875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:119752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.381 [2024-11-20 11:33:45.806894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:21:22.381 [2024-11-20 11:33:45.806933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:119760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.381 [2024-11-20 11:33:45.806970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:21:22.381 [2024-11-20 11:33:45.806998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:119768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.381 [2024-11-20 11:33:45.807017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:21:22.381 [2024-11-20 11:33:45.807044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:119776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.381 [2024-11-20 11:33:45.807068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:21:22.381 [2024-11-20 11:33:45.807104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:119784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.381 [2024-11-20 11:33:45.807124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:21:22.381 [2024-11-20 11:33:45.807151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:119792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.381 [2024-11-20 11:33:45.807170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:21:22.381 [2024-11-20 11:33:45.807197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:119800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.381 [2024-11-20 11:33:45.807216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:21:22.381 [2024-11-20 11:33:45.807243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:119808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.381 [2024-11-20 11:33:45.807262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:21:22.381 [2024-11-20 11:33:45.807288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:119816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.381 [2024-11-20 11:33:45.807307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:21:22.381 [2024-11-20 11:33:45.807334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:119824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.381 [2024-11-20 11:33:45.807352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:21:22.381 [2024-11-20 11:33:45.807379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:119832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.381 [2024-11-20 11:33:45.807398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:21:22.381 [2024-11-20 11:33:45.807425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:119840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.381 [2024-11-20 11:33:45.807444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:21:22.381 [2024-11-20 11:33:45.807470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:119848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.381 [2024-11-20 11:33:45.807489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:21:22.381 [2024-11-20 11:33:45.807526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:119856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.381 [2024-11-20 11:33:45.807546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:21:22.381 [2024-11-20 11:33:45.807573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:119864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.381 [2024-11-20 11:33:45.807592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:21:22.381 [2024-11-20 11:33:45.807634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:119872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.381 [2024-11-20 11:33:45.807657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:21:22.381 [2024-11-20 11:33:45.807685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:119880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.381 [2024-11-20 11:33:45.807704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:21:22.381 [2024-11-20 11:33:45.807731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:119888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.381 [2024-11-20 11:33:45.807750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:21:22.381 [2024-11-20 11:33:45.807777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:119896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.381 [2024-11-20 11:33:45.807796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:21:22.381 [2024-11-20 11:33:45.807823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:119904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.381 [2024-11-20 
11:33:45.807842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:21:22.381 [2024-11-20 11:33:45.807868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:119912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.381 [2024-11-20 11:33:45.807887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:21:22.381 [2024-11-20 11:33:45.807914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:119920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.381 [2024-11-20 11:33:45.807933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:21:22.381 [2024-11-20 11:33:45.807960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:119928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.381 [2024-11-20 11:33:45.807979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:21:22.381 [2024-11-20 11:33:45.808006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:119936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.381 [2024-11-20 11:33:45.808025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:21:22.381 [2024-11-20 11:33:45.808052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:119944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.381 [2024-11-20 11:33:45.808077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:21:22.381 [2024-11-20 11:33:45.808131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:119952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.381 [2024-11-20 11:33:45.808154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:21:22.381 [2024-11-20 11:33:45.808181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:119960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.381 [2024-11-20 11:33:45.808200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:21:22.381 [2024-11-20 11:33:45.808227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:119968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.381 [2024-11-20 11:33:45.808249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:21:22.381 [2024-11-20 11:33:45.808281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:120544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.381 [2024-11-20 11:33:45.808301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:21:22.381 [2024-11-20 11:33:45.808328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:120552 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.381 [2024-11-20 11:33:45.808347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:22.381 [2024-11-20 11:33:45.808378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:120560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.381 [2024-11-20 11:33:45.808407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:22.382 [2024-11-20 11:33:45.808436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:120568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.382 [2024-11-20 11:33:45.808456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:22.382 [2024-11-20 11:33:45.808483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:120576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.382 [2024-11-20 11:33:45.808503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:21:22.382 [2024-11-20 11:33:45.808529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:120584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.382 [2024-11-20 11:33:45.808549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:21:22.382 [2024-11-20 11:33:45.808576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:119568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.382 [2024-11-20 11:33:45.808595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:21:22.382 [2024-11-20 11:33:45.808648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:119576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.382 [2024-11-20 11:33:45.808671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:21:22.382 [2024-11-20 11:33:45.809672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:119584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.382 [2024-11-20 11:33:45.809719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:21:22.382 [2024-11-20 11:33:45.809757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:119592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.382 [2024-11-20 11:33:45.809802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:21:22.382 [2024-11-20 11:33:45.809832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:119600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.382 [2024-11-20 11:33:45.809854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:21:22.382 [2024-11-20 11:33:45.809881] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:119976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.382 [2024-11-20 11:33:45.809900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:21:22.382 [2024-11-20 11:33:45.809927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:119984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.382 [2024-11-20 11:33:45.809949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:21:22.382 [2024-11-20 11:33:45.809977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:119992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.382 [2024-11-20 11:33:45.809996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:21:22.382 [2024-11-20 11:33:45.810023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:120000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.382 [2024-11-20 11:33:45.810042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:21:22.382 [2024-11-20 11:33:45.810069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:120008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.382 [2024-11-20 11:33:45.810088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:21:22.382 [2024-11-20 11:33:45.810115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:120016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.382 [2024-11-20 11:33:45.810134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:21:22.382 [2024-11-20 11:33:45.810161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:120024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.382 [2024-11-20 11:33:45.810180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:21:22.382 [2024-11-20 11:33:45.810206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:120032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.382 [2024-11-20 11:33:45.810225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:21:22.382 [2024-11-20 11:33:45.810252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:120040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.382 [2024-11-20 11:33:45.810271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:21:22.382 [2024-11-20 11:33:45.810297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:120048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.382 [2024-11-20 11:33:45.810316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 
00:21:22.382 [2024-11-20 11:33:45.810343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:120056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.382 [2024-11-20 11:33:45.810372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:21:22.382 [2024-11-20 11:33:45.810400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:120064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.382 [2024-11-20 11:33:45.810419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:21:22.382 [2024-11-20 11:33:45.810446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:120072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.382 [2024-11-20 11:33:45.810465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:21:22.382 [2024-11-20 11:33:45.810491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:120080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.382 [2024-11-20 11:33:45.810510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:21:22.382 [2024-11-20 11:33:45.810537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:120088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.382 [2024-11-20 11:33:45.810555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:21:22.382 [2024-11-20 11:33:45.810582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:120096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.382 [2024-11-20 11:33:45.810604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:21:22.382 [2024-11-20 11:33:45.810650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:120104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.382 [2024-11-20 11:33:45.810671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:21:22.382 [2024-11-20 11:33:45.810698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:120112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.382 [2024-11-20 11:33:45.810719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:21:22.382 [2024-11-20 11:33:45.810747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:120120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.382 [2024-11-20 11:33:45.810772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:21:22.382 [2024-11-20 11:33:45.810804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:120128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.382 [2024-11-20 11:33:45.810824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:21:22.383 [2024-11-20 11:33:45.810851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:120136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.383 [2024-11-20 11:33:45.810878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:21:22.383 [2024-11-20 11:33:45.810909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:120144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.383 [2024-11-20 11:33:45.810940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:21:22.383 [2024-11-20 11:33:45.810975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:120152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.383 [2024-11-20 11:33:45.810995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:21:22.383 [2024-11-20 11:33:45.811033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:120160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.383 [2024-11-20 11:33:45.811061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:22.383 [2024-11-20 11:33:45.811089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:120168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.383 [2024-11-20 11:33:45.811109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:22.383 [2024-11-20 11:33:45.811136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:120176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.383 [2024-11-20 11:33:45.811155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:21:22.383 [2024-11-20 11:33:45.811182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:120184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.383 [2024-11-20 11:33:45.811201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:21:22.383 [2024-11-20 11:33:45.811228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:120192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.383 [2024-11-20 11:33:45.811247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:21:22.383 [2024-11-20 11:33:45.811274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:120200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.383 [2024-11-20 11:33:45.811293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:21:22.383 [2024-11-20 11:33:45.811320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:120208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.383 [2024-11-20 11:33:45.811339] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:21:22.383 [2024-11-20 11:33:45.811365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:120216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.383 [2024-11-20 11:33:45.811385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:21:22.383 [2024-11-20 11:33:45.811412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:120224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.383 [2024-11-20 11:33:45.811432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:21:22.383 [2024-11-20 11:33:45.811459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:120232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.383 [2024-11-20 11:33:45.811488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:21:22.383 [2024-11-20 11:33:45.811515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:120240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.383 [2024-11-20 11:33:45.811537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:21:22.383 [2024-11-20 11:33:45.811565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:120248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.383 [2024-11-20 11:33:45.811584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:21:22.383 [2024-11-20 11:33:45.811636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:120256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.383 [2024-11-20 11:33:45.811659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:21:22.383 [2024-11-20 11:33:45.811687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:120264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.383 [2024-11-20 11:33:45.811707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:21:22.383 [2024-11-20 11:33:45.811733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:120272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.383 [2024-11-20 11:33:45.811753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:21:22.383 [2024-11-20 11:33:45.811779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:120280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.383 [2024-11-20 11:33:45.811799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:21:22.383 [2024-11-20 11:33:45.811825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:120288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:21:22.383 [2024-11-20 11:33:45.811845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:21:22.383 [2024-11-20 11:33:45.811871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:120296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.383 [2024-11-20 11:33:45.811891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:21:22.383 [2024-11-20 11:33:45.811918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:120304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.383 [2024-11-20 11:33:45.811937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:21:22.383 [2024-11-20 11:33:45.811963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:120312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.383 [2024-11-20 11:33:45.811982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:21:22.383 [2024-11-20 11:33:45.812009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:120320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.383 [2024-11-20 11:33:45.812028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:21:22.383 [2024-11-20 11:33:45.812055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:120328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.383 [2024-11-20 11:33:45.812074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:21:22.383 [2024-11-20 11:33:45.812103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:120336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.383 [2024-11-20 11:33:45.812128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:21:22.383 [2024-11-20 11:33:45.812159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:120344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.383 [2024-11-20 11:33:45.812186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:21:22.383 [2024-11-20 11:33:45.812214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:120352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.383 [2024-11-20 11:33:45.812253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:21:22.383 [2024-11-20 11:33:45.812294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:120360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.383 [2024-11-20 11:33:45.812316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:21:22.383 [2024-11-20 11:33:45.812346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 
nsid:1 lba:120368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.383 [2024-11-20 11:33:45.812379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:21:22.383 [2024-11-20 11:33:45.813174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:120376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.383 [2024-11-20 11:33:45.813207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:21:22.383 [2024-11-20 11:33:45.813240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:120384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.383 [2024-11-20 11:33:45.813262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:21:22.383 [2024-11-20 11:33:45.813296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:120392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.383 [2024-11-20 11:33:45.813322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:21:22.383 [2024-11-20 11:33:45.813355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:120400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.384 [2024-11-20 11:33:45.813390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:21:22.384 [2024-11-20 11:33:45.813436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:120408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.384 [2024-11-20 11:33:45.813471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:21:22.384 [2024-11-20 11:33:45.813515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:120416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.384 [2024-11-20 11:33:45.813549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:22.384 [2024-11-20 11:33:45.813593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:120424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.384 [2024-11-20 11:33:45.813675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:22.384 [2024-11-20 11:33:45.813724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:120432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.384 [2024-11-20 11:33:45.813759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:21:22.384 [2024-11-20 11:33:45.813804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:120440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.384 [2024-11-20 11:33:45.813839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:21:22.384 [2024-11-20 11:33:45.813885] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:119608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.384 [2024-11-20 11:33:45.813942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:21:22.384 [2024-11-20 11:33:45.813990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:119616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.384 [2024-11-20 11:33:45.814025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:21:22.384 [2024-11-20 11:33:45.814075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:119624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.384 [2024-11-20 11:33:45.814112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:21:22.384 [2024-11-20 11:33:45.814163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:119632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.384 [2024-11-20 11:33:45.814198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:21:22.384 [2024-11-20 11:33:45.814244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:119640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.384 [2024-11-20 11:33:45.814278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:21:22.384 [2024-11-20 11:33:45.814324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:119648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.384 [2024-11-20 11:33:45.814359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:21:22.384 [2024-11-20 11:33:45.814402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:119656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.384 [2024-11-20 11:33:45.814432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:21:22.384 [2024-11-20 11:33:45.814472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:119664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.384 [2024-11-20 11:33:45.814494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:21:22.384 [2024-11-20 11:33:45.814521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:119672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.384 [2024-11-20 11:33:45.814541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:21:22.384 [2024-11-20 11:33:45.814568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:119680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.384 [2024-11-20 11:33:45.814587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 
sqhd:004e p:0 m:0 dnr:0 00:21:22.384 [2024-11-20 11:33:45.814630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:119688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.384 [2024-11-20 11:33:45.814653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:21:22.384 [2024-11-20 11:33:45.814680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:119696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.384 [2024-11-20 11:33:45.814700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:21:22.384 [2024-11-20 11:33:45.814726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:119704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.384 [2024-11-20 11:33:45.814758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:21:22.384 [2024-11-20 11:33:45.814787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:119712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.384 [2024-11-20 11:33:45.814806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:21:22.384 [2024-11-20 11:33:45.814833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:119720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.384 [2024-11-20 11:33:45.814853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:21:22.384 [2024-11-20 11:33:45.814879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:119728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.384 [2024-11-20 11:33:45.814898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:21:22.384 [2024-11-20 11:33:45.814925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:119736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.384 [2024-11-20 11:33:45.814944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:21:22.384 [2024-11-20 11:33:45.814971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:119744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.384 [2024-11-20 11:33:45.814991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:21:22.384 [2024-11-20 11:33:45.815017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:120448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.384 [2024-11-20 11:33:45.815036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:21:22.384 [2024-11-20 11:33:45.815068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:120456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.384 [2024-11-20 11:33:45.815104] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:21:22.384 [2024-11-20 11:33:45.815151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:120464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.384 [2024-11-20 11:33:45.815187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:21:22.384 [2024-11-20 11:33:45.815232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:120472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.384 [2024-11-20 11:33:45.815267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:21:22.384 [2024-11-20 11:33:45.815313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:120480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.384 [2024-11-20 11:33:45.815348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:21:22.384 [2024-11-20 11:33:45.815393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:120488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.384 [2024-11-20 11:33:45.815427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:21:22.384 [2024-11-20 11:33:45.815471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:120496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.384 [2024-11-20 11:33:45.815505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:21:22.384 [2024-11-20 11:33:45.815574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:120504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.384 [2024-11-20 11:33:45.815610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:21:22.384 [2024-11-20 11:33:45.815680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:120512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.384 [2024-11-20 11:33:45.815721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:21:22.384 [2024-11-20 11:33:45.815769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:120520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.384 [2024-11-20 11:33:45.815805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:21:22.384 [2024-11-20 11:33:45.815852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:120528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.384 [2024-11-20 11:33:45.815887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:22.385 [2024-11-20 11:33:45.815935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:120536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.385 [2024-11-20 
11:33:45.815970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:22.385 [2024-11-20 11:33:45.816012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:119752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.385 [2024-11-20 11:33:45.816042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:21:22.385 [2024-11-20 11:33:45.816080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:119760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.385 [2024-11-20 11:33:45.816101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:21:22.385 [2024-11-20 11:33:45.816128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:119768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.385 [2024-11-20 11:33:45.816148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:21:22.385 [2024-11-20 11:33:45.816178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:119776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.385 [2024-11-20 11:33:45.816197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:21:22.385 [2024-11-20 11:33:45.816224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:119784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.385 [2024-11-20 11:33:45.816243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:21:22.385 [2024-11-20 11:33:45.816270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:119792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.385 [2024-11-20 11:33:45.816289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:21:22.385 [2024-11-20 11:33:45.816316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:119800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.385 [2024-11-20 11:33:45.816334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:21:22.385 [2024-11-20 11:33:45.816373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:119808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.385 [2024-11-20 11:33:45.816393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:21:22.385 [2024-11-20 11:33:45.816420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:119816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.385 [2024-11-20 11:33:45.816439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:21:22.385 [2024-11-20 11:33:45.816466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:119824 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.385 [2024-11-20 11:33:45.816485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:21:22.385 [2024-11-20 11:33:45.816512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:119832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.385 [2024-11-20 11:33:45.816531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:21:22.385 [2024-11-20 11:33:45.816558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:119840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.385 [2024-11-20 11:33:45.816577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:21:22.385 [2024-11-20 11:33:45.816643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:119848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.385 [2024-11-20 11:33:45.816682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:21:22.385 [2024-11-20 11:33:45.816731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:119856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.385 [2024-11-20 11:33:45.816767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:21:22.385 [2024-11-20 11:33:45.816814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:119864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.385 [2024-11-20 11:33:45.816849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:21:22.385 [2024-11-20 11:33:45.816895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:119872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.385 [2024-11-20 11:33:45.816929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:21:22.385 [2024-11-20 11:33:45.816975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:119880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.385 [2024-11-20 11:33:45.817010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:21:22.385 [2024-11-20 11:33:45.817055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:119888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.385 [2024-11-20 11:33:45.817089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:21:22.385 [2024-11-20 11:33:45.817135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:119896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.385 [2024-11-20 11:33:45.817171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:21:22.385 [2024-11-20 11:33:45.817220] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:119904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.385 [2024-11-20 11:33:45.817278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:21:22.385 [2024-11-20 11:33:45.817327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:119912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.385 [2024-11-20 11:33:45.817363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:21:22.385 [2024-11-20 11:33:45.817411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:119920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.385 [2024-11-20 11:33:45.817447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:21:22.385 [2024-11-20 11:33:45.817493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:119928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.385 [2024-11-20 11:33:45.817525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:21:22.385 [2024-11-20 11:33:45.817565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:119936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.385 [2024-11-20 11:33:45.817596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:21:22.385 [2024-11-20 11:33:45.817654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:119944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.385 [2024-11-20 11:33:45.817686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:21:22.385 [2024-11-20 11:33:45.817713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:119952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.385 [2024-11-20 11:33:45.817732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:21:22.385 [2024-11-20 11:33:45.817759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:119960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.385 [2024-11-20 11:33:45.817778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:21:22.385 [2024-11-20 11:33:45.817805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:119968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.385 [2024-11-20 11:33:45.817824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:21:22.385 [2024-11-20 11:33:45.817851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:120544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.385 [2024-11-20 11:33:45.817870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:007f p:0 
m:0 dnr:0 00:21:22.385 [2024-11-20 11:33:45.817897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:120552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.385 [2024-11-20 11:33:45.817916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:22.386 [2024-11-20 11:33:45.817943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:120560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.386 [2024-11-20 11:33:45.817962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:22.386 [2024-11-20 11:33:45.817988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:120568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.386 [2024-11-20 11:33:45.818020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:22.386 [2024-11-20 11:33:45.818048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:120576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.386 [2024-11-20 11:33:45.818067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:21:22.386 [2024-11-20 11:33:45.818094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:120584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.386 [2024-11-20 11:33:45.818117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:21:22.386 [2024-11-20 11:33:45.818162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:119568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.386 [2024-11-20 11:33:45.818199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:21:22.386 [2024-11-20 11:33:45.818734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:119576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.386 [2024-11-20 11:33:45.818791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:21:22.386 [2024-11-20 11:33:45.818890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:119584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.386 [2024-11-20 11:33:45.818936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:21:22.386 [2024-11-20 11:33:45.818993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:119592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.386 [2024-11-20 11:33:45.819029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:21:22.386 [2024-11-20 11:33:45.819078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:119600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.386 [2024-11-20 11:33:45.819112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:21:22.386 [2024-11-20 11:33:45.819149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:119976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.386 [2024-11-20 11:33:45.819169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:21:22.386 [2024-11-20 11:33:45.819201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:119984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.386 [2024-11-20 11:33:45.819221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:21:22.386 [2024-11-20 11:33:45.819253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:119992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.386 [2024-11-20 11:33:45.819273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:21:22.386 [2024-11-20 11:33:45.819305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:120000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.386 [2024-11-20 11:33:45.819324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:21:22.386 [2024-11-20 11:33:45.819364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:120008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.386 [2024-11-20 11:33:45.819408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:21:22.386 [2024-11-20 11:33:45.819443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:120016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.386 [2024-11-20 11:33:45.819464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:21:22.386 [2024-11-20 11:33:45.819497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:120024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.386 [2024-11-20 11:33:45.819517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:21:22.386 [2024-11-20 11:33:45.819549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:120032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.386 [2024-11-20 11:33:45.819569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:21:22.386 [2024-11-20 11:33:45.819601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:120040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.386 [2024-11-20 11:33:45.819639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:21:22.386 [2024-11-20 11:33:45.819675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:120048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.386 [2024-11-20 
11:33:45.819709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:21:22.386 [2024-11-20 11:33:45.819765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:120056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.386 [2024-11-20 11:33:45.819801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:21:22.386 [2024-11-20 11:33:45.819855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:120064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.386 [2024-11-20 11:33:45.819900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:21:22.386 [2024-11-20 11:33:45.819952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:120072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.386 [2024-11-20 11:33:45.819989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:21:22.386 [2024-11-20 11:33:45.820042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:120080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.386 [2024-11-20 11:33:45.820078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:21:22.386 [2024-11-20 11:33:45.820133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:120088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.386 [2024-11-20 11:33:45.820168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:21:22.386 [2024-11-20 11:33:45.820221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:120096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.386 [2024-11-20 11:33:45.820256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:21:22.386 [2024-11-20 11:33:45.820311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:120104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.386 [2024-11-20 11:33:45.820351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:21:22.386 [2024-11-20 11:33:45.820430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:120112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.386 [2024-11-20 11:33:45.820468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:21:22.386 [2024-11-20 11:33:45.820531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:120120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.386 [2024-11-20 11:33:45.820568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:21:22.386 [2024-11-20 11:33:45.820638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:120128 len:8 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.386 [2024-11-20 11:33:45.820675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:21:22.386 [2024-11-20 11:33:45.820711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:120136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.386 [2024-11-20 11:33:45.820732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:21:22.386 [2024-11-20 11:33:45.820764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:120144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.386 [2024-11-20 11:33:45.820784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:21:22.387 [2024-11-20 11:33:45.820816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:120152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.387 [2024-11-20 11:33:45.820836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:21:22.387 [2024-11-20 11:33:45.820869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:120160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.387 [2024-11-20 11:33:45.820888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:22.387 [2024-11-20 11:33:45.820920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:120168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.387 [2024-11-20 11:33:45.820940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:22.387 [2024-11-20 11:33:45.820972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:120176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.387 [2024-11-20 11:33:45.820992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:21:22.387 [2024-11-20 11:33:45.821024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:120184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.387 [2024-11-20 11:33:45.821044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:21:22.387 [2024-11-20 11:33:45.821076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:120192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.387 [2024-11-20 11:33:45.821096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:21:22.387 [2024-11-20 11:33:45.821129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:120200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.387 [2024-11-20 11:33:45.821148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:21:22.387 [2024-11-20 11:33:45.821196] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:120208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.387 [2024-11-20 11:33:45.821230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:21:22.387 [2024-11-20 11:33:45.821283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:120216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.387 [2024-11-20 11:33:45.821319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:21:22.387 [2024-11-20 11:33:45.821372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:120224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.387 [2024-11-20 11:33:45.821408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:21:22.387 [2024-11-20 11:33:45.821462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:120232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.387 [2024-11-20 11:33:45.821497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:21:22.387 [2024-11-20 11:33:45.821552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:120240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.387 [2024-11-20 11:33:45.821587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:21:22.387 [2024-11-20 11:33:45.821684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:120248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.387 [2024-11-20 11:33:45.821730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:21:22.387 [2024-11-20 11:33:45.821784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:120256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.387 [2024-11-20 11:33:45.821820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:21:22.387 [2024-11-20 11:33:45.821878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:120264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.387 [2024-11-20 11:33:45.821916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:21:22.387 [2024-11-20 11:33:45.821972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:120272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.387 [2024-11-20 11:33:45.822007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:21:22.387 [2024-11-20 11:33:45.822064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:120280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.387 [2024-11-20 11:33:45.822100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0030 p:0 m:0 
dnr:0 00:21:22.387 [2024-11-20 11:33:45.822152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:120288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.387 [2024-11-20 11:33:45.822184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:21:22.387 [2024-11-20 11:33:45.822231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:120296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.387 [2024-11-20 11:33:45.822254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:21:22.387 [2024-11-20 11:33:45.822287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:120304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.387 [2024-11-20 11:33:45.822320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:21:22.387 [2024-11-20 11:33:45.822355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:120312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.387 [2024-11-20 11:33:45.822374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:21:22.387 [2024-11-20 11:33:45.822407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:120320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.387 [2024-11-20 11:33:45.822426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:21:22.387 [2024-11-20 11:33:45.822460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:120328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.387 [2024-11-20 11:33:45.822479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:21:22.387 [2024-11-20 11:33:45.822512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:120336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.387 [2024-11-20 11:33:45.822531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:21:22.387 [2024-11-20 11:33:45.822564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:120344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.387 [2024-11-20 11:33:45.822583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:21:22.387 [2024-11-20 11:33:45.822632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:120352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.387 [2024-11-20 11:33:45.822654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:21:22.387 [2024-11-20 11:33:45.822689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:120360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.387 [2024-11-20 11:33:45.822709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:21:22.387 [2024-11-20 11:33:45.822979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:120368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.387 [2024-11-20 11:33:45.823026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:21:22.387 8112.11 IOPS, 31.69 MiB/s [2024-11-20T11:34:07.976Z] 7685.16 IOPS, 30.02 MiB/s [2024-11-20T11:34:07.976Z] 7300.90 IOPS, 28.52 MiB/s [2024-11-20T11:34:07.976Z] 6953.24 IOPS, 27.16 MiB/s [2024-11-20T11:34:07.976Z] 6789.86 IOPS, 26.52 MiB/s [2024-11-20T11:34:07.976Z] 6879.22 IOPS, 26.87 MiB/s [2024-11-20T11:34:07.976Z] 6954.67 IOPS, 27.17 MiB/s [2024-11-20T11:34:07.976Z] 7018.20 IOPS, 27.41 MiB/s [2024-11-20T11:34:07.976Z] 7207.23 IOPS, 28.15 MiB/s [2024-11-20T11:34:07.976Z] 7354.41 IOPS, 28.73 MiB/s [2024-11-20T11:34:07.976Z] 7505.39 IOPS, 29.32 MiB/s [2024-11-20T11:34:07.976Z] 7628.62 IOPS, 29.80 MiB/s [2024-11-20T11:34:07.976Z] 7654.33 IOPS, 29.90 MiB/s [2024-11-20T11:34:07.976Z] 7651.74 IOPS, 29.89 MiB/s [2024-11-20T11:34:07.976Z] 7659.25 IOPS, 29.92 MiB/s [2024-11-20T11:34:07.976Z] 7704.73 IOPS, 30.10 MiB/s [2024-11-20T11:34:07.976Z] 7829.59 IOPS, 30.58 MiB/s [2024-11-20T11:34:07.976Z] 7941.66 IOPS, 31.02 MiB/s [2024-11-20T11:34:07.976Z] 8051.92 IOPS, 31.45 MiB/s [2024-11-20T11:34:07.976Z] [2024-11-20 11:34:04.700756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:1240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.387 [2024-11-20 11:34:04.700825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:21:22.388 [2024-11-20 11:34:04.700886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:1272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.388 [2024-11-20 11:34:04.700935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:21:22.388 [2024-11-20 11:34:04.700961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:1304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.388 [2024-11-20 11:34:04.700977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:21:22.388 [2024-11-20 11:34:04.700999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.388 [2024-11-20 11:34:04.701015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:21:22.388 [2024-11-20 11:34:04.701037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:1368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.388 [2024-11-20 11:34:04.701052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:21:22.388 [2024-11-20 11:34:04.701075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:1984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.388 [2024-11-20 11:34:04.701090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:87 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:21:22.388 [2024-11-20 11:34:04.701111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:2000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.388 [2024-11-20 11:34:04.701127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:21:22.388 [2024-11-20 11:34:04.701152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:1400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.388 [2024-11-20 11:34:04.701168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:21:22.388 [2024-11-20 11:34:04.701189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:1432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.388 [2024-11-20 11:34:04.701204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:21:22.388 [2024-11-20 11:34:04.701226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:1464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.388 [2024-11-20 11:34:04.701241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:21:22.388 [2024-11-20 11:34:04.701263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:1496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.388 [2024-11-20 11:34:04.701278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:21:22.388 [2024-11-20 11:34:04.701300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:1520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.388 [2024-11-20 11:34:04.701315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:21:22.388 [2024-11-20 11:34:04.701337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:1552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.388 [2024-11-20 11:34:04.701352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:21:22.388 [2024-11-20 11:34:04.701374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:1584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.388 [2024-11-20 11:34:04.701389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:21:22.388 [2024-11-20 11:34:04.701421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:1616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.388 [2024-11-20 11:34:04.701438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:21:22.388 [2024-11-20 11:34:04.701460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:1648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.388 [2024-11-20 11:34:04.701475] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:21:22.388 [2024-11-20 11:34:04.701497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:1680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.388 [2024-11-20 11:34:04.701513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:21:22.388 [2024-11-20 11:34:04.701535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.388 [2024-11-20 11:34:04.701551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:21:22.388 [2024-11-20 11:34:04.701574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:1456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.388 [2024-11-20 11:34:04.701589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:21:22.388 [2024-11-20 11:34:04.701634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:1488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.388 [2024-11-20 11:34:04.701671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:21:22.388 [2024-11-20 11:34:04.701698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.388 [2024-11-20 11:34:04.701714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:21:22.388 [2024-11-20 11:34:04.701742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:1560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.388 [2024-11-20 11:34:04.701757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:21:22.388 [2024-11-20 11:34:04.701779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:1592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.388 [2024-11-20 11:34:04.701794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:21:22.388 [2024-11-20 11:34:04.701816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:1624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.388 [2024-11-20 11:34:04.701832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:21:22.389 [2024-11-20 11:34:04.701854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:2016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.389 [2024-11-20 11:34:04.701869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:22.389 [2024-11-20 11:34:04.701891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:2032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.389 [2024-11-20 11:34:04.701907] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:22.389 [2024-11-20 11:34:04.701940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:2048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.389 [2024-11-20 11:34:04.701957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:21:22.389 [2024-11-20 11:34:04.701980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:1672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.389 [2024-11-20 11:34:04.701996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:21:22.389 [2024-11-20 11:34:04.702975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:1712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.389 [2024-11-20 11:34:04.703008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:21:22.389 [2024-11-20 11:34:04.703038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:2064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.389 [2024-11-20 11:34:04.703056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:21:22.389 [2024-11-20 11:34:04.703080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:2080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.389 [2024-11-20 11:34:04.703097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:21:22.389 [2024-11-20 11:34:04.703119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:2096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.389 [2024-11-20 11:34:04.703135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:21:22.389 [2024-11-20 11:34:04.703157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:2112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.389 [2024-11-20 11:34:04.703172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:21:22.389 [2024-11-20 11:34:04.703194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:2128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.389 [2024-11-20 11:34:04.703210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:21:22.389 [2024-11-20 11:34:04.703232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:1736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.389 [2024-11-20 11:34:04.703249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:21:22.389 [2024-11-20 11:34:04.703271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:1768 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:21:22.389 [2024-11-20 11:34:04.703287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:21:22.389 [2024-11-20 11:34:04.703309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.389 [2024-11-20 11:34:04.703325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:21:22.389 [2024-11-20 11:34:04.703347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:1832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.389 [2024-11-20 11:34:04.703362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:21:22.389 [2024-11-20 11:34:04.703384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.389 [2024-11-20 11:34:04.703414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:21:22.389 [2024-11-20 11:34:04.703438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:2144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.389 [2024-11-20 11:34:04.703455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:21:22.389 [2024-11-20 11:34:04.703477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:2160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.389 [2024-11-20 11:34:04.703493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:21:22.389 [2024-11-20 11:34:04.703515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:2176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.389 [2024-11-20 11:34:04.703530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:21:22.389 [2024-11-20 11:34:04.703552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:1728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.389 [2024-11-20 11:34:04.703567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:21:22.389 [2024-11-20 11:34:04.703589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.389 [2024-11-20 11:34:04.703606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:21:22.389 [2024-11-20 11:34:04.703661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:1792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.389 [2024-11-20 11:34:04.703679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:21:22.389 [2024-11-20 11:34:04.703701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 
lba:1824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.389 [2024-11-20 11:34:04.703718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:21:22.389 [2024-11-20 11:34:04.703740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:2200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.389 [2024-11-20 11:34:04.703755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:21:22.389 [2024-11-20 11:34:04.703777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:2216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.389 [2024-11-20 11:34:04.703792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:21:22.389 [2024-11-20 11:34:04.703815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.389 [2024-11-20 11:34:04.703830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:21:22.389 [2024-11-20 11:34:04.703852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:1880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.389 [2024-11-20 11:34:04.703868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:21:22.389 [2024-11-20 11:34:04.704243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:1840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.389 [2024-11-20 11:34:04.704283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:21:22.389 [2024-11-20 11:34:04.704313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:1872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.389 [2024-11-20 11:34:04.704331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:21:22.389 [2024-11-20 11:34:04.704353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.389 [2024-11-20 11:34:04.704369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:21:22.389 [2024-11-20 11:34:04.704391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:1944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.390 [2024-11-20 11:34:04.704407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:21:22.390 [2024-11-20 11:34:04.704428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:1968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.390 [2024-11-20 11:34:04.704444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:21:22.390 [2024-11-20 11:34:04.704465] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:2240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:21:22.390 [2024-11-20 11:34:04.704481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 
00:21:22.390 [2024-11-20 11:34:04.704503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:2256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:21:22.390 [2024-11-20 11:34:04.704519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
00:21:22.390 [2024-11-20 11:34:04.704541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:2272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:21:22.390 [2024-11-20 11:34:04.704557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 
00:21:22.390 [2024-11-20 11:34:04.704579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:2288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:21:22.390 [2024-11-20 11:34:04.704595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 
00:21:22.390 8108.38 IOPS, 31.67 MiB/s [2024-11-20T11:34:07.979Z] 
8105.79 IOPS, 31.66 MiB/s [2024-11-20T11:34:07.979Z] 
8105.38 IOPS, 31.66 MiB/s [2024-11-20T11:34:07.979Z] 
Received shutdown signal, test time was about 39.460925 seconds 
00:21:22.390 
00:21:22.390 Latency(us) 
00:21:22.390 [2024-11-20T11:34:07.979Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:21:22.390 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 
00:21:22.390 Verification LBA range: start 0x0 length 0x4000 
00:21:22.390 Nvme0n1 : 39.46 8113.03 31.69 0.00 0.00 15743.97 217.83 4087539.90 
00:21:22.390 [2024-11-20T11:34:07.979Z] =================================================================================================================== 
00:21:22.390 [2024-11-20T11:34:07.979Z] Total : 8113.03 31.69 0.00 0.00 15743.97 217.83 4087539.90 
00:21:22.390 11:34:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 
00:21:22.648 11:34:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 
00:21:22.648 11:34:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 
00:21:22.648 11:34:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 
00:21:22.648 11:34:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # nvmfcleanup 
00:21:22.648 11:34:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync 
00:21:22.648 11:34:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 
00:21:22.648 11:34:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e 
00:21:22.648 11:34:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20} 
00:21:22.648 11:34:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 
00:21:22.648 rmmod nvme_tcp 
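The MiB/s figures in the progress ticks and in the summary table above follow directly from the reported IOPS and the 4096-byte I/O size shown in the Job line (MiB/s = IOPS x 4096 / 2^20). A minimal shell sketch of that conversion, using the 8113.03 IOPS from the Total row; the awk one-liner is illustrative only and not part of the test scripts:

# hedged sketch: convert the reported IOPS to MiB/s for 4 KiB I/Os
awk 'BEGIN { iops = 8113.03; io_size = 4096; printf "%.2f MiB/s\n", iops * io_size / (1024 * 1024) }'
# prints 31.69 MiB/s, matching the MiB/s column reported for Nvme0n1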
rmmod nvme_fabrics 00:21:22.907 rmmod nvme_keyring 00:21:22.907 11:34:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:22.907 11:34:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e 00:21:22.907 11:34:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0 00:21:22.907 11:34:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@517 -- # '[' -n 89518 ']' 00:21:22.907 11:34:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # killprocess 89518 00:21:22.907 11:34:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 89518 ']' 00:21:22.907 11:34:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 89518 00:21:22.907 11:34:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:21:22.907 11:34:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:22.908 11:34:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89518 00:21:22.908 killing process with pid 89518 00:21:22.908 11:34:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:22.908 11:34:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:22.908 11:34:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89518' 00:21:22.908 11:34:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 89518 00:21:22.908 11:34:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 89518 00:21:22.908 11:34:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:22.908 11:34:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:22.908 11:34:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:22.908 11:34:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr 00:21:22.908 11:34:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:22.908 11:34:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-save 00:21:22.908 11:34:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-restore 00:21:22.908 11:34:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:22.908 11:34:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:21:22.908 11:34:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:21:22.908 11:34:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:21:22.908 11:34:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:21:22.908 11:34:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:21:23.167 11:34:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # 
ip link set nvmf_init_br down 00:21:23.167 11:34:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:21:23.167 11:34:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:21:23.167 11:34:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:21:23.167 11:34:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:21:23.167 11:34:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:21:23.167 11:34:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:21:23.167 11:34:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:23.167 11:34:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:23.167 11:34:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@246 -- # remove_spdk_ns 00:21:23.167 11:34:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:23.167 11:34:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:23.167 11:34:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:23.167 11:34:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@300 -- # return 0 00:21:23.167 00:21:23.167 real 0m44.825s 00:21:23.167 user 2m29.226s 00:21:23.167 sys 0m10.651s 00:21:23.167 11:34:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:23.167 11:34:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:21:23.167 ************************************ 00:21:23.167 END TEST nvmf_host_multipath_status 00:21:23.167 ************************************ 00:21:23.167 11:34:08 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:21:23.167 11:34:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:23.167 11:34:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:23.167 11:34:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:23.167 ************************************ 00:21:23.167 START TEST nvmf_discovery_remove_ifc 00:21:23.167 ************************************ 00:21:23.167 11:34:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:21:23.428 * Looking for test storage... 
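In the nvmftestfini trace above, the iptr step at nvmf/common.sh@791 logs grep -v SPDK_NVMF, iptables-save and iptables-restore as three separate xtrace lines; these presumably run as a single pipeline that reloads the firewall rules with the SPDK-added entries filtered out. A minimal sketch of that assumed composition (an assumption, not copied from the script):

# assumed pipeline behind the iptr helper seen in the teardown trace
iptables-save | grep -v SPDK_NVMF | iptables-restore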
00:21:23.428 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:21:23.428 11:34:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:23.428 11:34:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # lcov --version 00:21:23.428 11:34:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:23.428 11:34:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:23.428 11:34:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:23.428 11:34:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:23.428 11:34:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:23.428 11:34:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:21:23.428 11:34:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:21:23.428 11:34:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:21:23.428 11:34:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:21:23.428 11:34:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:21:23.428 11:34:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:21:23.428 11:34:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:21:23.428 11:34:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:23.428 11:34:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:21:23.428 11:34:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1 00:21:23.428 11:34:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:23.428 11:34:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:23.428 11:34:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:21:23.428 11:34:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:21:23.428 11:34:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:23.428 11:34:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:21:23.428 11:34:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:21:23.428 11:34:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:21:23.428 11:34:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:21:23.428 11:34:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:23.428 11:34:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:21:23.428 11:34:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:21:23.428 11:34:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:23.428 11:34:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:23.428 11:34:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:21:23.428 11:34:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:23.428 11:34:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:23.428 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:23.428 --rc genhtml_branch_coverage=1 00:21:23.428 --rc genhtml_function_coverage=1 00:21:23.428 --rc genhtml_legend=1 00:21:23.428 --rc geninfo_all_blocks=1 00:21:23.428 --rc geninfo_unexecuted_blocks=1 00:21:23.428 00:21:23.428 ' 00:21:23.428 11:34:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:23.428 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:23.428 --rc genhtml_branch_coverage=1 00:21:23.428 --rc genhtml_function_coverage=1 00:21:23.428 --rc genhtml_legend=1 00:21:23.428 --rc geninfo_all_blocks=1 00:21:23.428 --rc geninfo_unexecuted_blocks=1 00:21:23.428 00:21:23.428 ' 00:21:23.428 11:34:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:23.428 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:23.428 --rc genhtml_branch_coverage=1 00:21:23.428 --rc genhtml_function_coverage=1 00:21:23.428 --rc genhtml_legend=1 00:21:23.428 --rc geninfo_all_blocks=1 00:21:23.428 --rc geninfo_unexecuted_blocks=1 00:21:23.428 00:21:23.428 ' 00:21:23.428 11:34:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:23.428 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:23.428 --rc genhtml_branch_coverage=1 00:21:23.428 --rc genhtml_function_coverage=1 00:21:23.428 --rc genhtml_legend=1 00:21:23.428 --rc geninfo_all_blocks=1 00:21:23.428 --rc geninfo_unexecuted_blocks=1 00:21:23.428 00:21:23.428 ' 00:21:23.428 11:34:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:23.428 11:34:08 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:21:23.428 11:34:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:23.428 11:34:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:23.428 11:34:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:23.428 11:34:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:23.428 11:34:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:23.428 11:34:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:23.428 11:34:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:23.428 11:34:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:23.428 11:34:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:23.428 11:34:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:23.428 11:34:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a 00:21:23.428 11:34:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=806d442c-08ba-4719-b76d-33169283459a 00:21:23.428 11:34:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:23.428 11:34:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:23.428 11:34:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:23.428 11:34:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:23.428 11:34:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:23.428 11:34:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:21:23.428 11:34:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:23.428 11:34:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:23.428 11:34:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:23.428 11:34:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:23.428 11:34:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:23.428 11:34:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:23.428 11:34:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:21:23.428 11:34:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:23.428 11:34:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:21:23.428 11:34:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:23.428 11:34:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:23.428 11:34:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:23.428 11:34:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:23.429 11:34:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:23.429 11:34:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:23.429 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:23.429 11:34:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:23.429 11:34:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:23.429 11:34:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:23.429 11:34:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:21:23.429 11:34:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 
-- # discovery_port=8009 00:21:23.429 11:34:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:21:23.429 11:34:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:21:23.429 11:34:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:21:23.429 11:34:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:21:23.429 11:34:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:21:23.429 11:34:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:23.429 11:34:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:23.429 11:34:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:23.429 11:34:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:23.429 11:34:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:23.429 11:34:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:23.429 11:34:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:23.429 11:34:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:23.429 11:34:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:21:23.429 11:34:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:21:23.429 11:34:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:21:23.429 11:34:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:21:23.429 11:34:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:21:23.429 11:34:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@460 -- # nvmf_veth_init 00:21:23.429 11:34:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:23.429 11:34:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:21:23.429 11:34:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:21:23.429 11:34:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:21:23.429 11:34:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:23.429 11:34:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:21:23.429 11:34:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:23.429 11:34:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:21:23.429 11:34:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:23.429 11:34:08 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:21:23.429 11:34:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:23.429 11:34:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:23.429 11:34:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:23.429 11:34:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:23.429 11:34:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:23.429 11:34:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:23.429 11:34:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:21:23.429 Cannot find device "nvmf_init_br" 00:21:23.429 11:34:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # true 00:21:23.429 11:34:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:21:23.429 Cannot find device "nvmf_init_br2" 00:21:23.429 11:34:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # true 00:21:23.429 11:34:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:21:23.429 Cannot find device "nvmf_tgt_br" 00:21:23.429 11:34:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@164 -- # true 00:21:23.429 11:34:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:21:23.769 Cannot find device "nvmf_tgt_br2" 00:21:23.769 11:34:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@165 -- # true 00:21:23.769 11:34:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:21:23.769 Cannot find device "nvmf_init_br" 00:21:23.769 11:34:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@166 -- # true 00:21:23.769 11:34:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:21:23.769 Cannot find device "nvmf_init_br2" 00:21:23.769 11:34:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@167 -- # true 00:21:23.769 11:34:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:21:23.769 Cannot find device "nvmf_tgt_br" 00:21:23.769 11:34:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@168 -- # true 00:21:23.769 11:34:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:21:23.769 Cannot find device "nvmf_tgt_br2" 00:21:23.769 11:34:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@169 -- # true 00:21:23.769 11:34:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:21:23.769 Cannot find device "nvmf_br" 00:21:23.769 11:34:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@170 -- # true 00:21:23.769 11:34:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:21:23.769 Cannot find device "nvmf_init_if" 00:21:23.769 11:34:09 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@171 -- # true 00:21:23.769 11:34:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:21:23.769 Cannot find device "nvmf_init_if2" 00:21:23.769 11:34:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@172 -- # true 00:21:23.769 11:34:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:23.769 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:23.769 11:34:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@173 -- # true 00:21:23.769 11:34:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:23.769 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:23.769 11:34:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@174 -- # true 00:21:23.769 11:34:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:21:23.769 11:34:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:23.769 11:34:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:21:23.769 11:34:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:23.769 11:34:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:23.769 11:34:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:23.769 11:34:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:23.769 11:34:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:23.769 11:34:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:21:23.769 11:34:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:21:23.769 11:34:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:21:23.769 11:34:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:21:23.769 11:34:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:21:23.769 11:34:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:21:23.769 11:34:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:21:23.769 11:34:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:21:23.769 11:34:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:21:23.769 11:34:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:23.769 11:34:09 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:23.769 11:34:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:23.769 11:34:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:21:24.067 11:34:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:21:24.067 11:34:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:21:24.068 11:34:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:21:24.068 11:34:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:24.068 11:34:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:24.068 11:34:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:24.068 11:34:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:21:24.068 11:34:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:21:24.068 11:34:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:21:24.068 11:34:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:24.068 11:34:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:21:24.068 11:34:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:21:24.068 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:21:24.068 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.068 ms 00:21:24.068 00:21:24.068 --- 10.0.0.3 ping statistics --- 00:21:24.068 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:24.068 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:21:24.068 11:34:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:21:24.068 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:21:24.068 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.065 ms 00:21:24.068 00:21:24.068 --- 10.0.0.4 ping statistics --- 00:21:24.068 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:24.068 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:21:24.068 11:34:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:24.068 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:24.068 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.061 ms 00:21:24.068 00:21:24.068 --- 10.0.0.1 ping statistics --- 00:21:24.068 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:24.068 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:21:24.068 11:34:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:21:24.068 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:24.068 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.088 ms 00:21:24.068 00:21:24.068 --- 10.0.0.2 ping statistics --- 00:21:24.068 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:24.068 rtt min/avg/max/mdev = 0.088/0.088/0.088/0.000 ms 00:21:24.068 11:34:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:24.068 11:34:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@461 -- # return 0 00:21:24.068 11:34:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:24.068 11:34:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:24.068 11:34:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:24.068 11:34:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:24.068 11:34:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:24.068 11:34:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:24.068 11:34:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:24.068 11:34:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:21:24.068 11:34:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:24.068 11:34:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:24.068 11:34:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:21:24.068 11:34:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # nvmfpid=91005 00:21:24.068 11:34:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@510 -- # waitforlisten 91005 00:21:24.068 11:34:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 91005 ']' 00:21:24.068 11:34:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:24.068 11:34:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:24.068 11:34:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:24.068 11:34:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:24.068 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
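The nvmf_veth_init sequence above builds the virtual topology this test runs on: a target-side namespace bridged to host-side veth ends, verified by the pings. As a minimal standalone sketch (assuming iproute2 and a root shell, using the first interface pair's names and addresses shown in the log; the second pair, nvmf_init_if2/nvmf_tgt_if2, is created the same way):

  # target side lives in its own namespace; host side stays in the root namespace
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  # bridge the host-side peers so initiator and target can reach each other
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  ip link set nvmf_init_if up && ip link set nvmf_init_br up && ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  # open the NVMe/TCP port and check reachability, as the log does
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.3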
00:21:24.068 11:34:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:24.068 11:34:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:21:24.068 [2024-11-20 11:34:09.464412] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 00:21:24.068 [2024-11-20 11:34:09.464508] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:24.068 [2024-11-20 11:34:09.611648] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:24.068 [2024-11-20 11:34:09.643994] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:24.068 [2024-11-20 11:34:09.644169] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:24.068 [2024-11-20 11:34:09.644303] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:24.068 [2024-11-20 11:34:09.644361] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:24.068 [2024-11-20 11:34:09.644394] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:24.068 [2024-11-20 11:34:09.644801] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:24.328 11:34:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:24.328 11:34:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:21:24.328 11:34:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:24.328 11:34:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:24.328 11:34:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:21:24.328 11:34:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:24.328 11:34:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:21:24.328 11:34:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:24.328 11:34:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:21:24.328 [2024-11-20 11:34:09.785476] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:24.328 [2024-11-20 11:34:09.793592] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 00:21:24.328 null0 00:21:24.328 [2024-11-20 11:34:09.825551] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:21:24.328 11:34:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:24.328 11:34:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=91041 00:21:24.328 11:34:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 91041 /tmp/host.sock 00:21:24.328 11:34:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:21:24.328 11:34:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 91041 ']' 00:21:24.328 11:34:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:21:24.328 11:34:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:24.328 11:34:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:21:24.328 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:21:24.328 11:34:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:24.328 11:34:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:21:24.328 [2024-11-20 11:34:09.905636] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 00:21:24.328 [2024-11-20 11:34:09.905762] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91041 ] 00:21:24.588 [2024-11-20 11:34:10.056485] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:24.588 [2024-11-20 11:34:10.097071] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:24.588 11:34:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:24.588 11:34:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:21:24.588 11:34:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:24.588 11:34:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:21:24.588 11:34:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:24.588 11:34:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:21:24.588 11:34:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:24.588 11:34:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:21:24.588 11:34:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:24.588 11:34:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:21:24.847 11:34:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:24.847 11:34:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:21:24.847 11:34:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 
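The rpc_cmd calls above are the test helper forwarding to SPDK's JSON-RPC client against the host socket. Reproduced by hand, the host-side setup would look roughly like the sketch below (assuming scripts/rpc.py from the SPDK repo; binary path, socket path, and all flags are the ones shown in the log):

  # start the host-side app on its own RPC socket, with bdev_nvme debug logging
  ./build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme &

  # apply the bdev_nvme options used by the test, then finish framework init
  ./scripts/rpc.py -s /tmp/host.sock bdev_nvme_set_options -e 1
  ./scripts/rpc.py -s /tmp/host.sock framework_start_init

  # attach via discovery (port 8009) with short reconnect/loss timeouts so the
  # interface-removal path below fails over quickly
  ./scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 \
      -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 \
      --fast-io-fail-timeout-sec 1 --wait-for-attach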
00:21:24.847 11:34:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:21:25.781 [2024-11-20 11:34:11.218750] bdev_nvme.c:7479:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:21:25.781 [2024-11-20 11:34:11.218809] bdev_nvme.c:7565:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:21:25.781 [2024-11-20 11:34:11.218830] bdev_nvme.c:7442:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:21:25.781 [2024-11-20 11:34:11.304898] bdev_nvme.c:7408:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem nvme0 00:21:25.781 [2024-11-20 11:34:11.359388] bdev_nvme.c:5635:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.3:4420 00:21:25.781 [2024-11-20 11:34:11.360276] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x2206150:1 started. 00:21:25.781 [2024-11-20 11:34:11.361993] bdev_nvme.c:8275:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:21:25.781 [2024-11-20 11:34:11.362197] bdev_nvme.c:8275:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:21:25.781 [2024-11-20 11:34:11.362238] bdev_nvme.c:8275:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:21:25.781 [2024-11-20 11:34:11.362257] bdev_nvme.c:7298:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:21:25.781 [2024-11-20 11:34:11.362284] bdev_nvme.c:7257:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:21:25.781 11:34:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:25.781 11:34:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:21:25.781 11:34:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:21:25.781 11:34:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:25.781 11:34:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:21:25.781 11:34:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:25.781 [2024-11-20 11:34:11.367368] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x2206150 was disconnected and freed. delete nvme_qpair. 
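The get_bdev_list and wait_for_bdev helpers, invoked repeatedly below, reduce to a small polling loop over bdev_get_bdevs; a sketch of roughly what the log's traces show (the jq/sort/xargs pipeline and the one-second poll are taken directly from the log, the helpers themselves live in discovery_remove_ifc.sh):

  get_bdev_list() {
      # list bdev names visible to the host app, normalized to one sorted line
      ./scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
  }

  wait_for_bdev() {
      # poll until the bdev list matches the expected value (e.g. "nvme0n1" or "")
      local expected=$1
      while [[ "$(get_bdev_list)" != "$expected" ]]; do
          sleep 1
      done
  }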
00:21:25.781 11:34:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:21:25.781 11:34:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:21:25.781 11:34:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:21:26.039 11:34:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:26.040 11:34:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:21:26.040 11:34:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.3/24 dev nvmf_tgt_if 00:21:26.040 11:34:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down 00:21:26.040 11:34:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:21:26.040 11:34:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:21:26.040 11:34:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:21:26.040 11:34:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:21:26.040 11:34:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:26.040 11:34:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:21:26.040 11:34:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:26.040 11:34:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:21:26.040 11:34:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:26.040 11:34:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:21:26.040 11:34:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:21:26.975 11:34:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:21:26.975 11:34:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:26.975 11:34:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:26.975 11:34:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:21:26.975 11:34:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:21:26.975 11:34:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:21:26.975 11:34:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:21:26.975 11:34:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:26.975 11:34:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:21:26.975 11:34:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:21:28.370 11:34:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:21:28.370 11:34:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:21:28.370 11:34:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:28.370 11:34:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.370 11:34:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:21:28.370 11:34:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:21:28.370 11:34:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:21:28.370 11:34:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.370 11:34:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:21:28.370 11:34:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:21:29.305 11:34:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:21:29.305 11:34:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:29.305 11:34:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:21:29.305 11:34:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:29.305 11:34:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:21:29.305 11:34:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:21:29.305 11:34:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:21:29.305 11:34:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:29.305 11:34:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:21:29.305 11:34:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:21:30.240 11:34:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:21:30.240 11:34:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:30.240 11:34:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:30.240 11:34:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:21:30.240 11:34:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:21:30.240 11:34:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:21:30.240 11:34:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:21:30.240 11:34:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:30.240 11:34:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:21:30.240 11:34:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 
1 00:21:31.175 11:34:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:21:31.175 11:34:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:31.175 11:34:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:21:31.175 11:34:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:21:31.175 11:34:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:31.175 11:34:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:21:31.175 11:34:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:21:31.434 11:34:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:31.434 [2024-11-20 11:34:16.790098] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:21:31.434 [2024-11-20 11:34:16.790205] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:31.434 [2024-11-20 11:34:16.790231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.434 [2024-11-20 11:34:16.790253] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:31.434 [2024-11-20 11:34:16.790271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.434 [2024-11-20 11:34:16.790288] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:31.434 [2024-11-20 11:34:16.790304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.434 [2024-11-20 11:34:16.790322] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:31.434 [2024-11-20 11:34:16.790338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.434 [2024-11-20 11:34:16.790355] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:21:31.434 [2024-11-20 11:34:16.790371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.434 [2024-11-20 11:34:16.790386] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21e3520 is same with the state(6) to be set 00:21:31.434 11:34:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:21:31.434 11:34:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:21:31.434 [2024-11-20 11:34:16.800091] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21e3520 (9): Bad file descriptor 00:21:31.434 [2024-11-20 11:34:16.810145] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: 
*INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:21:31.434 [2024-11-20 11:34:16.810212] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:21:31.434 [2024-11-20 11:34:16.810225] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:21:31.434 [2024-11-20 11:34:16.810232] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:21:31.434 [2024-11-20 11:34:16.810288] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:21:32.372 11:34:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:21:32.372 11:34:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:32.372 11:34:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.372 11:34:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:21:32.372 11:34:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:21:32.372 11:34:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:21:32.372 11:34:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:21:32.372 [2024-11-20 11:34:17.852693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:21:32.372 [2024-11-20 11:34:17.852841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21e3520 with addr=10.0.0.3, port=4420 00:21:32.372 [2024-11-20 11:34:17.852882] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21e3520 is same with the state(6) to be set 00:21:32.372 [2024-11-20 11:34:17.852950] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21e3520 (9): Bad file descriptor 00:21:32.372 [2024-11-20 11:34:17.853776] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] Unable to perform failover, already in progress. 00:21:32.372 [2024-11-20 11:34:17.853875] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:21:32.372 [2024-11-20 11:34:17.853903] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:21:32.372 [2024-11-20 11:34:17.853928] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:21:32.372 [2024-11-20 11:34:17.853949] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:21:32.372 [2024-11-20 11:34:17.853963] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:21:32.372 [2024-11-20 11:34:17.853973] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:21:32.372 [2024-11-20 11:34:17.853993] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 
00:21:32.372 [2024-11-20 11:34:17.854004] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:21:32.372 11:34:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.372 11:34:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:21:32.372 11:34:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:21:33.335 [2024-11-20 11:34:18.854063] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:21:33.335 [2024-11-20 11:34:18.854123] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:21:33.335 [2024-11-20 11:34:18.854166] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:21:33.335 [2024-11-20 11:34:18.854179] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:21:33.335 [2024-11-20 11:34:18.854192] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] already in failed state 00:21:33.335 [2024-11-20 11:34:18.854202] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:21:33.335 [2024-11-20 11:34:18.854209] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:21:33.335 [2024-11-20 11:34:18.854215] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:21:33.335 [2024-11-20 11:34:18.854276] bdev_nvme.c:7230:remove_discovery_entry: *INFO*: Discovery[10.0.0.3:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 00:21:33.335 [2024-11-20 11:34:18.854335] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:33.335 [2024-11-20 11:34:18.854352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.335 [2024-11-20 11:34:18.854365] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:33.335 [2024-11-20 11:34:18.854375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.335 [2024-11-20 11:34:18.854386] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:33.335 [2024-11-20 11:34:18.854400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.335 [2024-11-20 11:34:18.854418] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:33.335 [2024-11-20 11:34:18.854436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.335 [2024-11-20 11:34:18.854452] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:21:33.335 [2024-11-20 11:34:18.854462] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.335 [2024-11-20 11:34:18.854472] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] in failed state. 00:21:33.335 [2024-11-20 11:34:18.854493] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21702d0 (9): Bad file descriptor 00:21:33.335 [2024-11-20 11:34:18.855138] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:21:33.335 [2024-11-20 11:34:18.855171] nvme_ctrlr.c:1217:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] Failed to read the CC register 00:21:33.335 11:34:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:21:33.335 11:34:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:33.335 11:34:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:33.335 11:34:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:21:33.335 11:34:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:21:33.335 11:34:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:21:33.335 11:34:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:21:33.335 11:34:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:33.594 11:34:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:21:33.594 11:34:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:21:33.594 11:34:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:33.594 11:34:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:21:33.594 11:34:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:21:33.594 11:34:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:33.594 11:34:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:33.594 11:34:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:21:33.594 11:34:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:21:33.594 11:34:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:21:33.594 11:34:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:21:33.594 11:34:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:33.594 11:34:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:21:33.594 11:34:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:21:34.535 11:34:20 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:21:34.535 11:34:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:34.535 11:34:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:21:34.536 11:34:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:21:34.536 11:34:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:21:34.536 11:34:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:34.536 11:34:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:21:34.536 11:34:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:34.536 11:34:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:21:34.536 11:34:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:21:35.468 [2024-11-20 11:34:20.861172] bdev_nvme.c:7479:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:21:35.468 [2024-11-20 11:34:20.861210] bdev_nvme.c:7565:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:21:35.468 [2024-11-20 11:34:20.861232] bdev_nvme.c:7442:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:21:35.468 [2024-11-20 11:34:20.947312] bdev_nvme.c:7408:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem nvme1 00:21:35.468 [2024-11-20 11:34:21.002206] bdev_nvme.c:5635:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.3:4420 00:21:35.468 [2024-11-20 11:34:21.002945] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] Connecting qpair 0x21dd100:1 started. 00:21:35.468 [2024-11-20 11:34:21.004225] bdev_nvme.c:8275:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:21:35.468 [2024-11-20 11:34:21.004276] bdev_nvme.c:8275:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:21:35.468 [2024-11-20 11:34:21.004304] bdev_nvme.c:8275:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:21:35.468 [2024-11-20 11:34:21.004326] bdev_nvme.c:7298:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme1 done 00:21:35.468 [2024-11-20 11:34:21.004339] bdev_nvme.c:7257:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:21:35.468 [2024-11-20 11:34:21.009858] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] qpair 0x21dd100 was disconnected and freed. delete nvme_qpair. 
00:21:35.727 11:34:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:21:35.727 11:34:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:35.727 11:34:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:35.727 11:34:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:21:35.727 11:34:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:21:35.727 11:34:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:21:35.727 11:34:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:21:35.727 11:34:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:35.727 11:34:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:21:35.727 11:34:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:21:35.727 11:34:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 91041 00:21:35.727 11:34:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 91041 ']' 00:21:35.727 11:34:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 91041 00:21:35.727 11:34:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:21:35.727 11:34:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:35.727 11:34:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 91041 00:21:35.727 killing process with pid 91041 00:21:35.727 11:34:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:35.727 11:34:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:35.727 11:34:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 91041' 00:21:35.727 11:34:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 91041 00:21:35.727 11:34:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 91041 00:21:35.986 11:34:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:21:35.986 11:34:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:35.986 11:34:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:21:35.986 11:34:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:35.986 11:34:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:21:35.986 11:34:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:35.986 11:34:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:35.986 rmmod nvme_tcp 00:21:35.986 rmmod nvme_fabrics 00:21:35.986 rmmod nvme_keyring 00:21:35.986 11:34:21 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:35.986 11:34:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:21:35.986 11:34:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:21:35.986 11:34:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@517 -- # '[' -n 91005 ']' 00:21:35.986 11:34:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # killprocess 91005 00:21:35.986 11:34:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 91005 ']' 00:21:35.986 11:34:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 91005 00:21:35.986 11:34:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:21:35.986 11:34:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:35.986 11:34:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 91005 00:21:35.986 killing process with pid 91005 00:21:35.986 11:34:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:35.986 11:34:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:35.986 11:34:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 91005' 00:21:35.986 11:34:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 91005 00:21:35.986 11:34:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 91005 00:21:36.244 11:34:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:36.244 11:34:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:36.244 11:34:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:36.244 11:34:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:21:36.244 11:34:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:36.244 11:34:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-restore 00:21:36.244 11:34:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-save 00:21:36.244 11:34:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:36.244 11:34:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:21:36.244 11:34:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:21:36.244 11:34:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:21:36.244 11:34:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:21:36.244 11:34:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:21:36.244 11:34:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:21:36.244 11:34:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:21:36.244 11:34:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:21:36.244 11:34:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:21:36.244 11:34:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:21:36.244 11:34:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:21:36.244 11:34:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:21:36.244 11:34:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:36.245 11:34:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:36.245 11:34:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@246 -- # remove_spdk_ns 00:21:36.245 11:34:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:36.245 11:34:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:36.245 11:34:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:36.504 11:34:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@300 -- # return 0 00:21:36.504 00:21:36.504 real 0m13.092s 00:21:36.504 user 0m23.081s 00:21:36.504 sys 0m1.582s 00:21:36.504 11:34:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:36.504 11:34:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:21:36.504 ************************************ 00:21:36.504 END TEST nvmf_discovery_remove_ifc 00:21:36.504 ************************************ 00:21:36.504 11:34:21 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:21:36.504 11:34:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:36.504 11:34:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:36.504 11:34:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:36.504 ************************************ 00:21:36.504 START TEST nvmf_identify_kernel_target 00:21:36.504 ************************************ 00:21:36.504 11:34:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:21:36.504 * Looking for test storage... 
00:21:36.504 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:21:36.504 11:34:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:36.504 11:34:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # lcov --version 00:21:36.504 11:34:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:36.504 11:34:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:36.504 11:34:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:36.504 11:34:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:36.504 11:34:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:36.504 11:34:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:21:36.504 11:34:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:21:36.504 11:34:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:21:36.504 11:34:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:21:36.504 11:34:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:21:36.504 11:34:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:21:36.504 11:34:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:21:36.504 11:34:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:36.504 11:34:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:21:36.504 11:34:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:21:36.504 11:34:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:36.504 11:34:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:36.504 11:34:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:21:36.504 11:34:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:21:36.504 11:34:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:36.504 11:34:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:21:36.504 11:34:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:21:36.504 11:34:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:21:36.504 11:34:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:21:36.504 11:34:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:36.504 11:34:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:21:36.504 11:34:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:21:36.504 11:34:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:36.504 11:34:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:36.504 11:34:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:21:36.504 11:34:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:36.504 11:34:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:36.504 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:36.504 --rc genhtml_branch_coverage=1 00:21:36.504 --rc genhtml_function_coverage=1 00:21:36.504 --rc genhtml_legend=1 00:21:36.504 --rc geninfo_all_blocks=1 00:21:36.504 --rc geninfo_unexecuted_blocks=1 00:21:36.504 00:21:36.504 ' 00:21:36.504 11:34:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:36.504 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:36.504 --rc genhtml_branch_coverage=1 00:21:36.504 --rc genhtml_function_coverage=1 00:21:36.504 --rc genhtml_legend=1 00:21:36.504 --rc geninfo_all_blocks=1 00:21:36.504 --rc geninfo_unexecuted_blocks=1 00:21:36.504 00:21:36.504 ' 00:21:36.504 11:34:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:36.504 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:36.504 --rc genhtml_branch_coverage=1 00:21:36.504 --rc genhtml_function_coverage=1 00:21:36.504 --rc genhtml_legend=1 00:21:36.504 --rc geninfo_all_blocks=1 00:21:36.505 --rc geninfo_unexecuted_blocks=1 00:21:36.505 00:21:36.505 ' 00:21:36.505 11:34:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:36.505 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:36.505 --rc genhtml_branch_coverage=1 00:21:36.505 --rc genhtml_function_coverage=1 00:21:36.505 --rc genhtml_legend=1 00:21:36.505 --rc geninfo_all_blocks=1 00:21:36.505 --rc geninfo_unexecuted_blocks=1 00:21:36.505 00:21:36.505 ' 00:21:36.505 11:34:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 
00:21:36.505 11:34:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:21:36.505 11:34:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:36.505 11:34:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:36.505 11:34:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:36.505 11:34:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:36.505 11:34:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:36.505 11:34:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:36.505 11:34:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:36.505 11:34:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:36.505 11:34:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:36.505 11:34:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:36.505 11:34:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a 00:21:36.505 11:34:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=806d442c-08ba-4719-b76d-33169283459a 00:21:36.505 11:34:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:36.505 11:34:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:36.505 11:34:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:36.505 11:34:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:36.505 11:34:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:36.505 11:34:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:21:36.505 11:34:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:36.505 11:34:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:36.505 11:34:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:36.505 11:34:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:36.505 11:34:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:36.505 11:34:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:36.505 11:34:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:21:36.505 11:34:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:36.505 11:34:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:21:36.505 11:34:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:36.505 11:34:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:36.505 11:34:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:36.505 11:34:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:36.505 11:34:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:36.505 11:34:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:36.505 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:36.505 11:34:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:36.505 11:34:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:36.505 11:34:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:36.505 11:34:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:21:36.505 11:34:22 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:36.505 11:34:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:36.505 11:34:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:36.505 11:34:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:36.505 11:34:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:36.505 11:34:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:36.505 11:34:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:36.505 11:34:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:36.505 11:34:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:21:36.505 11:34:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:21:36.505 11:34:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:21:36.505 11:34:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:21:36.505 11:34:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:21:36.505 11:34:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@460 -- # nvmf_veth_init 00:21:36.505 11:34:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:36.505 11:34:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:21:36.505 11:34:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:21:36.505 11:34:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:21:36.505 11:34:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:36.505 11:34:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:21:36.505 11:34:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:36.505 11:34:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:21:36.505 11:34:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:36.505 11:34:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:21:36.505 11:34:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:36.505 11:34:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:36.505 11:34:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:36.505 11:34:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:36.505 11:34:22 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:36.505 11:34:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:36.505 11:34:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:21:36.505 Cannot find device "nvmf_init_br" 00:21:36.505 11:34:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # true 00:21:36.505 11:34:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:21:36.505 Cannot find device "nvmf_init_br2" 00:21:36.505 11:34:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # true 00:21:36.505 11:34:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:21:36.764 Cannot find device "nvmf_tgt_br" 00:21:36.764 11:34:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@164 -- # true 00:21:36.764 11:34:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:21:36.764 Cannot find device "nvmf_tgt_br2" 00:21:36.764 11:34:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@165 -- # true 00:21:36.764 11:34:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:21:36.764 Cannot find device "nvmf_init_br" 00:21:36.764 11:34:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@166 -- # true 00:21:36.764 11:34:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:21:36.764 Cannot find device "nvmf_init_br2" 00:21:36.764 11:34:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@167 -- # true 00:21:36.764 11:34:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:21:36.764 Cannot find device "nvmf_tgt_br" 00:21:36.764 11:34:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@168 -- # true 00:21:36.764 11:34:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:21:36.764 Cannot find device "nvmf_tgt_br2" 00:21:36.764 11:34:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@169 -- # true 00:21:36.764 11:34:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:21:36.764 Cannot find device "nvmf_br" 00:21:36.764 11:34:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@170 -- # true 00:21:36.764 11:34:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:21:36.764 Cannot find device "nvmf_init_if" 00:21:36.764 11:34:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@171 -- # true 00:21:36.764 11:34:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:21:36.764 Cannot find device "nvmf_init_if2" 00:21:36.764 11:34:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@172 -- # true 00:21:36.764 11:34:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:36.764 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:36.764 11:34:22 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@173 -- # true 00:21:36.764 11:34:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:36.764 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:36.765 11:34:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@174 -- # true 00:21:36.765 11:34:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:21:36.765 11:34:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:36.765 11:34:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:21:36.765 11:34:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:36.765 11:34:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:36.765 11:34:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:36.765 11:34:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:36.765 11:34:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:36.765 11:34:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:21:36.765 11:34:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:21:36.765 11:34:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:21:36.765 11:34:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:21:36.765 11:34:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:21:36.765 11:34:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:21:36.765 11:34:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:21:36.765 11:34:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:21:36.765 11:34:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:21:36.765 11:34:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:36.765 11:34:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:36.765 11:34:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:36.765 11:34:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:21:36.765 11:34:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:21:36.765 11:34:22 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:21:36.765 11:34:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:21:37.024 11:34:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:37.024 11:34:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:37.024 11:34:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:37.024 11:34:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:21:37.024 11:34:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:21:37.024 11:34:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:21:37.024 11:34:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:37.024 11:34:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:21:37.024 11:34:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:21:37.024 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:21:37.024 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.078 ms 00:21:37.024 00:21:37.024 --- 10.0.0.3 ping statistics --- 00:21:37.024 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:37.024 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:21:37.024 11:34:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:21:37.024 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:21:37.024 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.039 ms 00:21:37.024 00:21:37.024 --- 10.0.0.4 ping statistics --- 00:21:37.024 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:37.024 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:21:37.024 11:34:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:37.024 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:37.024 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:21:37.024 00:21:37.024 --- 10.0.0.1 ping statistics --- 00:21:37.024 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:37.024 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:21:37.024 11:34:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:21:37.024 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:21:37.024 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.058 ms 00:21:37.024 00:21:37.024 --- 10.0.0.2 ping statistics --- 00:21:37.024 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:37.024 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:21:37.024 11:34:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:37.024 11:34:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@461 -- # return 0 00:21:37.024 11:34:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:37.024 11:34:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:37.024 11:34:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:37.024 11:34:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:37.024 11:34:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:37.024 11:34:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:37.024 11:34:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:37.024 11:34:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:21:37.024 11:34:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:21:37.024 11:34:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # local ip 00:21:37.024 11:34:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:37.024 11:34:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:37.024 11:34:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:37.024 11:34:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:37.024 11:34:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:37.024 11:34:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:37.024 11:34:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:37.024 11:34:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:37.024 11:34:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:37.024 11:34:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:21:37.024 11:34:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:21:37.024 11:34:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:21:37.024 11:34:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:21:37.024 11:34:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # 
kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:21:37.024 11:34:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:21:37.024 11:34:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:21:37.024 11:34:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # local block nvme 00:21:37.024 11:34:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:21:37.024 11:34:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@670 -- # modprobe nvmet 00:21:37.024 11:34:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:21:37.024 11:34:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:21:37.283 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:37.283 Waiting for block devices as requested 00:21:37.283 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:21:37.542 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:21:37.542 11:34:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:21:37.542 11:34:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:21:37.542 11:34:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:21:37.542 11:34:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:21:37.542 11:34:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:21:37.542 11:34:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:21:37.542 11:34:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:21:37.542 11:34:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:21:37.542 11:34:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:21:37.542 No valid GPT data, bailing 00:21:37.542 11:34:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:21:37.542 11:34:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:21:37.542 11:34:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:21:37.542 11:34:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:21:37.542 11:34:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:21:37.542 11:34:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n2 ]] 00:21:37.542 11:34:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n2 00:21:37.542 11:34:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n2 00:21:37.542 11:34:23 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:21:37.542 11:34:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:21:37.542 11:34:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n2 00:21:37.542 11:34:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n2 pt 00:21:37.542 11:34:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:21:37.801 No valid GPT data, bailing 00:21:37.801 11:34:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:21:37.801 11:34:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:21:37.801 11:34:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:21:37.801 11:34:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n2 00:21:37.801 11:34:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:21:37.801 11:34:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n3 ]] 00:21:37.801 11:34:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n3 00:21:37.801 11:34:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n3 00:21:37.801 11:34:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:21:37.801 11:34:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:21:37.801 11:34:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n3 00:21:37.801 11:34:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n3 pt 00:21:37.801 11:34:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:21:37.801 No valid GPT data, bailing 00:21:37.801 11:34:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:21:37.801 11:34:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:21:37.802 11:34:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:21:37.802 11:34:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n3 00:21:37.802 11:34:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:21:37.802 11:34:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme1n1 ]] 00:21:37.802 11:34:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme1n1 00:21:37.802 11:34:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:21:37.802 11:34:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:21:37.802 11:34:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
common/autotest_common.sh@1653 -- # [[ none != none ]] 00:21:37.802 11:34:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme1n1 00:21:37.802 11:34:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:21:37.802 11:34:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:21:37.802 No valid GPT data, bailing 00:21:37.802 11:34:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:21:37.802 11:34:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:21:37.802 11:34:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:21:37.802 11:34:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme1n1 00:21:37.802 11:34:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -b /dev/nvme1n1 ]] 00:21:37.802 11:34:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:21:37.802 11:34:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:21:37.802 11:34:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:21:37.802 11:34:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:21:37.802 11:34:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:21:37.802 11:34:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo /dev/nvme1n1 00:21:37.802 11:34:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 1 00:21:37.802 11:34:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:21:37.802 11:34:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo tcp 00:21:37.802 11:34:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # echo 4420 00:21:37.802 11:34:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@702 -- # echo ipv4 00:21:37.802 11:34:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:21:37.802 11:34:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a --hostid=806d442c-08ba-4719-b76d-33169283459a -a 10.0.0.1 -t tcp -s 4420 00:21:37.802 00:21:37.802 Discovery Log Number of Records 2, Generation counter 2 00:21:37.802 =====Discovery Log Entry 0====== 00:21:37.802 trtype: tcp 00:21:37.802 adrfam: ipv4 00:21:37.802 subtype: current discovery subsystem 00:21:37.802 treq: not specified, sq flow control disable supported 00:21:37.802 portid: 1 00:21:37.802 trsvcid: 4420 00:21:37.802 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:21:37.802 traddr: 10.0.0.1 00:21:37.802 eflags: none 00:21:37.802 sectype: none 00:21:37.802 =====Discovery Log Entry 1====== 00:21:37.802 trtype: tcp 00:21:37.802 adrfam: ipv4 00:21:37.802 subtype: nvme subsystem 00:21:37.802 treq: not 
specified, sq flow control disable supported 00:21:37.802 portid: 1 00:21:37.802 trsvcid: 4420 00:21:37.802 subnqn: nqn.2016-06.io.spdk:testnqn 00:21:37.802 traddr: 10.0.0.1 00:21:37.802 eflags: none 00:21:37.802 sectype: none 00:21:37.802 11:34:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:21:37.802 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:21:38.061 ===================================================== 00:21:38.061 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:21:38.061 ===================================================== 00:21:38.061 Controller Capabilities/Features 00:21:38.061 ================================ 00:21:38.061 Vendor ID: 0000 00:21:38.061 Subsystem Vendor ID: 0000 00:21:38.061 Serial Number: 97ee52862fab9d2e378f 00:21:38.061 Model Number: Linux 00:21:38.061 Firmware Version: 6.8.9-20 00:21:38.061 Recommended Arb Burst: 0 00:21:38.061 IEEE OUI Identifier: 00 00 00 00:21:38.061 Multi-path I/O 00:21:38.061 May have multiple subsystem ports: No 00:21:38.061 May have multiple controllers: No 00:21:38.061 Associated with SR-IOV VF: No 00:21:38.061 Max Data Transfer Size: Unlimited 00:21:38.061 Max Number of Namespaces: 0 00:21:38.061 Max Number of I/O Queues: 1024 00:21:38.061 NVMe Specification Version (VS): 1.3 00:21:38.061 NVMe Specification Version (Identify): 1.3 00:21:38.061 Maximum Queue Entries: 1024 00:21:38.061 Contiguous Queues Required: No 00:21:38.061 Arbitration Mechanisms Supported 00:21:38.061 Weighted Round Robin: Not Supported 00:21:38.061 Vendor Specific: Not Supported 00:21:38.061 Reset Timeout: 7500 ms 00:21:38.061 Doorbell Stride: 4 bytes 00:21:38.061 NVM Subsystem Reset: Not Supported 00:21:38.061 Command Sets Supported 00:21:38.061 NVM Command Set: Supported 00:21:38.061 Boot Partition: Not Supported 00:21:38.061 Memory Page Size Minimum: 4096 bytes 00:21:38.061 Memory Page Size Maximum: 4096 bytes 00:21:38.061 Persistent Memory Region: Not Supported 00:21:38.061 Optional Asynchronous Events Supported 00:21:38.061 Namespace Attribute Notices: Not Supported 00:21:38.061 Firmware Activation Notices: Not Supported 00:21:38.061 ANA Change Notices: Not Supported 00:21:38.061 PLE Aggregate Log Change Notices: Not Supported 00:21:38.061 LBA Status Info Alert Notices: Not Supported 00:21:38.061 EGE Aggregate Log Change Notices: Not Supported 00:21:38.061 Normal NVM Subsystem Shutdown event: Not Supported 00:21:38.061 Zone Descriptor Change Notices: Not Supported 00:21:38.061 Discovery Log Change Notices: Supported 00:21:38.061 Controller Attributes 00:21:38.061 128-bit Host Identifier: Not Supported 00:21:38.061 Non-Operational Permissive Mode: Not Supported 00:21:38.061 NVM Sets: Not Supported 00:21:38.061 Read Recovery Levels: Not Supported 00:21:38.061 Endurance Groups: Not Supported 00:21:38.061 Predictable Latency Mode: Not Supported 00:21:38.061 Traffic Based Keep ALive: Not Supported 00:21:38.061 Namespace Granularity: Not Supported 00:21:38.061 SQ Associations: Not Supported 00:21:38.061 UUID List: Not Supported 00:21:38.061 Multi-Domain Subsystem: Not Supported 00:21:38.061 Fixed Capacity Management: Not Supported 00:21:38.061 Variable Capacity Management: Not Supported 00:21:38.061 Delete Endurance Group: Not Supported 00:21:38.061 Delete NVM Set: Not Supported 00:21:38.061 Extended LBA Formats Supported: Not Supported 00:21:38.061 Flexible Data 
Placement Supported: Not Supported 00:21:38.061 00:21:38.061 Controller Memory Buffer Support 00:21:38.061 ================================ 00:21:38.061 Supported: No 00:21:38.061 00:21:38.061 Persistent Memory Region Support 00:21:38.061 ================================ 00:21:38.061 Supported: No 00:21:38.061 00:21:38.061 Admin Command Set Attributes 00:21:38.061 ============================ 00:21:38.061 Security Send/Receive: Not Supported 00:21:38.061 Format NVM: Not Supported 00:21:38.061 Firmware Activate/Download: Not Supported 00:21:38.061 Namespace Management: Not Supported 00:21:38.061 Device Self-Test: Not Supported 00:21:38.061 Directives: Not Supported 00:21:38.061 NVMe-MI: Not Supported 00:21:38.062 Virtualization Management: Not Supported 00:21:38.062 Doorbell Buffer Config: Not Supported 00:21:38.062 Get LBA Status Capability: Not Supported 00:21:38.062 Command & Feature Lockdown Capability: Not Supported 00:21:38.062 Abort Command Limit: 1 00:21:38.062 Async Event Request Limit: 1 00:21:38.062 Number of Firmware Slots: N/A 00:21:38.062 Firmware Slot 1 Read-Only: N/A 00:21:38.062 Firmware Activation Without Reset: N/A 00:21:38.062 Multiple Update Detection Support: N/A 00:21:38.062 Firmware Update Granularity: No Information Provided 00:21:38.062 Per-Namespace SMART Log: No 00:21:38.062 Asymmetric Namespace Access Log Page: Not Supported 00:21:38.062 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:21:38.062 Command Effects Log Page: Not Supported 00:21:38.062 Get Log Page Extended Data: Supported 00:21:38.062 Telemetry Log Pages: Not Supported 00:21:38.062 Persistent Event Log Pages: Not Supported 00:21:38.062 Supported Log Pages Log Page: May Support 00:21:38.062 Commands Supported & Effects Log Page: Not Supported 00:21:38.062 Feature Identifiers & Effects Log Page:May Support 00:21:38.062 NVMe-MI Commands & Effects Log Page: May Support 00:21:38.062 Data Area 4 for Telemetry Log: Not Supported 00:21:38.062 Error Log Page Entries Supported: 1 00:21:38.062 Keep Alive: Not Supported 00:21:38.062 00:21:38.062 NVM Command Set Attributes 00:21:38.062 ========================== 00:21:38.062 Submission Queue Entry Size 00:21:38.062 Max: 1 00:21:38.062 Min: 1 00:21:38.062 Completion Queue Entry Size 00:21:38.062 Max: 1 00:21:38.062 Min: 1 00:21:38.062 Number of Namespaces: 0 00:21:38.062 Compare Command: Not Supported 00:21:38.062 Write Uncorrectable Command: Not Supported 00:21:38.062 Dataset Management Command: Not Supported 00:21:38.062 Write Zeroes Command: Not Supported 00:21:38.062 Set Features Save Field: Not Supported 00:21:38.062 Reservations: Not Supported 00:21:38.062 Timestamp: Not Supported 00:21:38.062 Copy: Not Supported 00:21:38.062 Volatile Write Cache: Not Present 00:21:38.062 Atomic Write Unit (Normal): 1 00:21:38.062 Atomic Write Unit (PFail): 1 00:21:38.062 Atomic Compare & Write Unit: 1 00:21:38.062 Fused Compare & Write: Not Supported 00:21:38.062 Scatter-Gather List 00:21:38.062 SGL Command Set: Supported 00:21:38.062 SGL Keyed: Not Supported 00:21:38.062 SGL Bit Bucket Descriptor: Not Supported 00:21:38.062 SGL Metadata Pointer: Not Supported 00:21:38.062 Oversized SGL: Not Supported 00:21:38.062 SGL Metadata Address: Not Supported 00:21:38.062 SGL Offset: Supported 00:21:38.062 Transport SGL Data Block: Not Supported 00:21:38.062 Replay Protected Memory Block: Not Supported 00:21:38.062 00:21:38.062 Firmware Slot Information 00:21:38.062 ========================= 00:21:38.062 Active slot: 0 00:21:38.062 00:21:38.062 00:21:38.062 Error Log 
00:21:38.062 ========= 00:21:38.062 00:21:38.062 Active Namespaces 00:21:38.062 ================= 00:21:38.062 Discovery Log Page 00:21:38.062 ================== 00:21:38.062 Generation Counter: 2 00:21:38.062 Number of Records: 2 00:21:38.062 Record Format: 0 00:21:38.062 00:21:38.062 Discovery Log Entry 0 00:21:38.062 ---------------------- 00:21:38.062 Transport Type: 3 (TCP) 00:21:38.062 Address Family: 1 (IPv4) 00:21:38.062 Subsystem Type: 3 (Current Discovery Subsystem) 00:21:38.062 Entry Flags: 00:21:38.062 Duplicate Returned Information: 0 00:21:38.062 Explicit Persistent Connection Support for Discovery: 0 00:21:38.062 Transport Requirements: 00:21:38.062 Secure Channel: Not Specified 00:21:38.062 Port ID: 1 (0x0001) 00:21:38.062 Controller ID: 65535 (0xffff) 00:21:38.062 Admin Max SQ Size: 32 00:21:38.062 Transport Service Identifier: 4420 00:21:38.062 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:21:38.062 Transport Address: 10.0.0.1 00:21:38.062 Discovery Log Entry 1 00:21:38.062 ---------------------- 00:21:38.062 Transport Type: 3 (TCP) 00:21:38.062 Address Family: 1 (IPv4) 00:21:38.062 Subsystem Type: 2 (NVM Subsystem) 00:21:38.062 Entry Flags: 00:21:38.062 Duplicate Returned Information: 0 00:21:38.062 Explicit Persistent Connection Support for Discovery: 0 00:21:38.062 Transport Requirements: 00:21:38.062 Secure Channel: Not Specified 00:21:38.062 Port ID: 1 (0x0001) 00:21:38.062 Controller ID: 65535 (0xffff) 00:21:38.062 Admin Max SQ Size: 32 00:21:38.062 Transport Service Identifier: 4420 00:21:38.062 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:21:38.062 Transport Address: 10.0.0.1 00:21:38.062 11:34:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:21:38.322 get_feature(0x01) failed 00:21:38.322 get_feature(0x02) failed 00:21:38.322 get_feature(0x04) failed 00:21:38.322 ===================================================== 00:21:38.322 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:21:38.322 ===================================================== 00:21:38.322 Controller Capabilities/Features 00:21:38.322 ================================ 00:21:38.322 Vendor ID: 0000 00:21:38.322 Subsystem Vendor ID: 0000 00:21:38.322 Serial Number: 5feb7beda398f04c1ec9 00:21:38.322 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:21:38.322 Firmware Version: 6.8.9-20 00:21:38.322 Recommended Arb Burst: 6 00:21:38.322 IEEE OUI Identifier: 00 00 00 00:21:38.322 Multi-path I/O 00:21:38.322 May have multiple subsystem ports: Yes 00:21:38.322 May have multiple controllers: Yes 00:21:38.322 Associated with SR-IOV VF: No 00:21:38.322 Max Data Transfer Size: Unlimited 00:21:38.322 Max Number of Namespaces: 1024 00:21:38.322 Max Number of I/O Queues: 128 00:21:38.322 NVMe Specification Version (VS): 1.3 00:21:38.322 NVMe Specification Version (Identify): 1.3 00:21:38.322 Maximum Queue Entries: 1024 00:21:38.322 Contiguous Queues Required: No 00:21:38.322 Arbitration Mechanisms Supported 00:21:38.322 Weighted Round Robin: Not Supported 00:21:38.322 Vendor Specific: Not Supported 00:21:38.322 Reset Timeout: 7500 ms 00:21:38.322 Doorbell Stride: 4 bytes 00:21:38.322 NVM Subsystem Reset: Not Supported 00:21:38.322 Command Sets Supported 00:21:38.322 NVM Command Set: Supported 00:21:38.322 Boot Partition: Not Supported 00:21:38.322 Memory 
Page Size Minimum: 4096 bytes 00:21:38.322 Memory Page Size Maximum: 4096 bytes 00:21:38.322 Persistent Memory Region: Not Supported 00:21:38.322 Optional Asynchronous Events Supported 00:21:38.322 Namespace Attribute Notices: Supported 00:21:38.322 Firmware Activation Notices: Not Supported 00:21:38.322 ANA Change Notices: Supported 00:21:38.322 PLE Aggregate Log Change Notices: Not Supported 00:21:38.322 LBA Status Info Alert Notices: Not Supported 00:21:38.322 EGE Aggregate Log Change Notices: Not Supported 00:21:38.322 Normal NVM Subsystem Shutdown event: Not Supported 00:21:38.322 Zone Descriptor Change Notices: Not Supported 00:21:38.322 Discovery Log Change Notices: Not Supported 00:21:38.322 Controller Attributes 00:21:38.322 128-bit Host Identifier: Supported 00:21:38.322 Non-Operational Permissive Mode: Not Supported 00:21:38.322 NVM Sets: Not Supported 00:21:38.322 Read Recovery Levels: Not Supported 00:21:38.322 Endurance Groups: Not Supported 00:21:38.322 Predictable Latency Mode: Not Supported 00:21:38.322 Traffic Based Keep ALive: Supported 00:21:38.322 Namespace Granularity: Not Supported 00:21:38.322 SQ Associations: Not Supported 00:21:38.322 UUID List: Not Supported 00:21:38.322 Multi-Domain Subsystem: Not Supported 00:21:38.322 Fixed Capacity Management: Not Supported 00:21:38.322 Variable Capacity Management: Not Supported 00:21:38.322 Delete Endurance Group: Not Supported 00:21:38.322 Delete NVM Set: Not Supported 00:21:38.322 Extended LBA Formats Supported: Not Supported 00:21:38.322 Flexible Data Placement Supported: Not Supported 00:21:38.322 00:21:38.322 Controller Memory Buffer Support 00:21:38.322 ================================ 00:21:38.322 Supported: No 00:21:38.322 00:21:38.322 Persistent Memory Region Support 00:21:38.322 ================================ 00:21:38.322 Supported: No 00:21:38.322 00:21:38.322 Admin Command Set Attributes 00:21:38.322 ============================ 00:21:38.322 Security Send/Receive: Not Supported 00:21:38.322 Format NVM: Not Supported 00:21:38.322 Firmware Activate/Download: Not Supported 00:21:38.322 Namespace Management: Not Supported 00:21:38.322 Device Self-Test: Not Supported 00:21:38.322 Directives: Not Supported 00:21:38.322 NVMe-MI: Not Supported 00:21:38.322 Virtualization Management: Not Supported 00:21:38.322 Doorbell Buffer Config: Not Supported 00:21:38.322 Get LBA Status Capability: Not Supported 00:21:38.322 Command & Feature Lockdown Capability: Not Supported 00:21:38.322 Abort Command Limit: 4 00:21:38.322 Async Event Request Limit: 4 00:21:38.322 Number of Firmware Slots: N/A 00:21:38.322 Firmware Slot 1 Read-Only: N/A 00:21:38.322 Firmware Activation Without Reset: N/A 00:21:38.322 Multiple Update Detection Support: N/A 00:21:38.322 Firmware Update Granularity: No Information Provided 00:21:38.322 Per-Namespace SMART Log: Yes 00:21:38.322 Asymmetric Namespace Access Log Page: Supported 00:21:38.322 ANA Transition Time : 10 sec 00:21:38.322 00:21:38.322 Asymmetric Namespace Access Capabilities 00:21:38.322 ANA Optimized State : Supported 00:21:38.322 ANA Non-Optimized State : Supported 00:21:38.322 ANA Inaccessible State : Supported 00:21:38.322 ANA Persistent Loss State : Supported 00:21:38.322 ANA Change State : Supported 00:21:38.322 ANAGRPID is not changed : No 00:21:38.322 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:21:38.322 00:21:38.322 ANA Group Identifier Maximum : 128 00:21:38.322 Number of ANA Group Identifiers : 128 00:21:38.322 Max Number of Allowed Namespaces : 1024 00:21:38.322 Subsystem 
NQN: nqn.2016-06.io.spdk:testnqn 00:21:38.322 Command Effects Log Page: Supported 00:21:38.322 Get Log Page Extended Data: Supported 00:21:38.322 Telemetry Log Pages: Not Supported 00:21:38.322 Persistent Event Log Pages: Not Supported 00:21:38.322 Supported Log Pages Log Page: May Support 00:21:38.322 Commands Supported & Effects Log Page: Not Supported 00:21:38.322 Feature Identifiers & Effects Log Page:May Support 00:21:38.322 NVMe-MI Commands & Effects Log Page: May Support 00:21:38.322 Data Area 4 for Telemetry Log: Not Supported 00:21:38.322 Error Log Page Entries Supported: 128 00:21:38.322 Keep Alive: Supported 00:21:38.322 Keep Alive Granularity: 1000 ms 00:21:38.322 00:21:38.322 NVM Command Set Attributes 00:21:38.322 ========================== 00:21:38.322 Submission Queue Entry Size 00:21:38.322 Max: 64 00:21:38.322 Min: 64 00:21:38.322 Completion Queue Entry Size 00:21:38.322 Max: 16 00:21:38.322 Min: 16 00:21:38.322 Number of Namespaces: 1024 00:21:38.322 Compare Command: Not Supported 00:21:38.322 Write Uncorrectable Command: Not Supported 00:21:38.322 Dataset Management Command: Supported 00:21:38.322 Write Zeroes Command: Supported 00:21:38.322 Set Features Save Field: Not Supported 00:21:38.322 Reservations: Not Supported 00:21:38.322 Timestamp: Not Supported 00:21:38.322 Copy: Not Supported 00:21:38.322 Volatile Write Cache: Present 00:21:38.322 Atomic Write Unit (Normal): 1 00:21:38.322 Atomic Write Unit (PFail): 1 00:21:38.322 Atomic Compare & Write Unit: 1 00:21:38.322 Fused Compare & Write: Not Supported 00:21:38.322 Scatter-Gather List 00:21:38.322 SGL Command Set: Supported 00:21:38.322 SGL Keyed: Not Supported 00:21:38.322 SGL Bit Bucket Descriptor: Not Supported 00:21:38.322 SGL Metadata Pointer: Not Supported 00:21:38.322 Oversized SGL: Not Supported 00:21:38.322 SGL Metadata Address: Not Supported 00:21:38.322 SGL Offset: Supported 00:21:38.322 Transport SGL Data Block: Not Supported 00:21:38.322 Replay Protected Memory Block: Not Supported 00:21:38.322 00:21:38.322 Firmware Slot Information 00:21:38.322 ========================= 00:21:38.322 Active slot: 0 00:21:38.322 00:21:38.322 Asymmetric Namespace Access 00:21:38.322 =========================== 00:21:38.322 Change Count : 0 00:21:38.322 Number of ANA Group Descriptors : 1 00:21:38.322 ANA Group Descriptor : 0 00:21:38.322 ANA Group ID : 1 00:21:38.322 Number of NSID Values : 1 00:21:38.322 Change Count : 0 00:21:38.322 ANA State : 1 00:21:38.322 Namespace Identifier : 1 00:21:38.322 00:21:38.322 Commands Supported and Effects 00:21:38.322 ============================== 00:21:38.322 Admin Commands 00:21:38.322 -------------- 00:21:38.322 Get Log Page (02h): Supported 00:21:38.322 Identify (06h): Supported 00:21:38.322 Abort (08h): Supported 00:21:38.322 Set Features (09h): Supported 00:21:38.322 Get Features (0Ah): Supported 00:21:38.323 Asynchronous Event Request (0Ch): Supported 00:21:38.323 Keep Alive (18h): Supported 00:21:38.323 I/O Commands 00:21:38.323 ------------ 00:21:38.323 Flush (00h): Supported 00:21:38.323 Write (01h): Supported LBA-Change 00:21:38.323 Read (02h): Supported 00:21:38.323 Write Zeroes (08h): Supported LBA-Change 00:21:38.323 Dataset Management (09h): Supported 00:21:38.323 00:21:38.323 Error Log 00:21:38.323 ========= 00:21:38.323 Entry: 0 00:21:38.323 Error Count: 0x3 00:21:38.323 Submission Queue Id: 0x0 00:21:38.323 Command Id: 0x5 00:21:38.323 Phase Bit: 0 00:21:38.323 Status Code: 0x2 00:21:38.323 Status Code Type: 0x0 00:21:38.323 Do Not Retry: 1 00:21:38.323 Error 
Location: 0x28 00:21:38.323 LBA: 0x0 00:21:38.323 Namespace: 0x0 00:21:38.323 Vendor Log Page: 0x0 00:21:38.323 ----------- 00:21:38.323 Entry: 1 00:21:38.323 Error Count: 0x2 00:21:38.323 Submission Queue Id: 0x0 00:21:38.323 Command Id: 0x5 00:21:38.323 Phase Bit: 0 00:21:38.323 Status Code: 0x2 00:21:38.323 Status Code Type: 0x0 00:21:38.323 Do Not Retry: 1 00:21:38.323 Error Location: 0x28 00:21:38.323 LBA: 0x0 00:21:38.323 Namespace: 0x0 00:21:38.323 Vendor Log Page: 0x0 00:21:38.323 ----------- 00:21:38.323 Entry: 2 00:21:38.323 Error Count: 0x1 00:21:38.323 Submission Queue Id: 0x0 00:21:38.323 Command Id: 0x4 00:21:38.323 Phase Bit: 0 00:21:38.323 Status Code: 0x2 00:21:38.323 Status Code Type: 0x0 00:21:38.323 Do Not Retry: 1 00:21:38.323 Error Location: 0x28 00:21:38.323 LBA: 0x0 00:21:38.323 Namespace: 0x0 00:21:38.323 Vendor Log Page: 0x0 00:21:38.323 00:21:38.323 Number of Queues 00:21:38.323 ================ 00:21:38.323 Number of I/O Submission Queues: 128 00:21:38.323 Number of I/O Completion Queues: 128 00:21:38.323 00:21:38.323 ZNS Specific Controller Data 00:21:38.323 ============================ 00:21:38.323 Zone Append Size Limit: 0 00:21:38.323 00:21:38.323 00:21:38.323 Active Namespaces 00:21:38.323 ================= 00:21:38.323 get_feature(0x05) failed 00:21:38.323 Namespace ID:1 00:21:38.323 Command Set Identifier: NVM (00h) 00:21:38.323 Deallocate: Supported 00:21:38.323 Deallocated/Unwritten Error: Not Supported 00:21:38.323 Deallocated Read Value: Unknown 00:21:38.323 Deallocate in Write Zeroes: Not Supported 00:21:38.323 Deallocated Guard Field: 0xFFFF 00:21:38.323 Flush: Supported 00:21:38.323 Reservation: Not Supported 00:21:38.323 Namespace Sharing Capabilities: Multiple Controllers 00:21:38.323 Size (in LBAs): 1310720 (5GiB) 00:21:38.323 Capacity (in LBAs): 1310720 (5GiB) 00:21:38.323 Utilization (in LBAs): 1310720 (5GiB) 00:21:38.323 UUID: 0505161d-65b2-4e4f-823e-e7b2fe9df832 00:21:38.323 Thin Provisioning: Not Supported 00:21:38.323 Per-NS Atomic Units: Yes 00:21:38.323 Atomic Boundary Size (Normal): 0 00:21:38.323 Atomic Boundary Size (PFail): 0 00:21:38.323 Atomic Boundary Offset: 0 00:21:38.323 NGUID/EUI64 Never Reused: No 00:21:38.323 ANA group ID: 1 00:21:38.323 Namespace Write Protected: No 00:21:38.323 Number of LBA Formats: 1 00:21:38.323 Current LBA Format: LBA Format #00 00:21:38.323 LBA Format #00: Data Size: 4096 Metadata Size: 0 00:21:38.323 00:21:38.323 11:34:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:21:38.323 11:34:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:38.323 11:34:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:21:38.323 11:34:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:38.323 11:34:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:21:38.323 11:34:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:38.323 11:34:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:38.323 rmmod nvme_tcp 00:21:38.323 rmmod nvme_fabrics 00:21:38.323 11:34:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:38.323 11:34:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:21:38.323 11:34:23 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:21:38.323 11:34:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:21:38.323 11:34:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:38.323 11:34:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:38.323 11:34:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:38.323 11:34:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:21:38.323 11:34:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-save 00:21:38.323 11:34:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:38.323 11:34:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-restore 00:21:38.323 11:34:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:38.323 11:34:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:21:38.323 11:34:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:21:38.323 11:34:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:21:38.323 11:34:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:21:38.582 11:34:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:21:38.582 11:34:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:21:38.582 11:34:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:21:38.582 11:34:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:21:38.582 11:34:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:21:38.582 11:34:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:21:38.582 11:34:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:21:38.582 11:34:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:21:38.582 11:34:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:38.582 11:34:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:38.582 11:34:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@246 -- # remove_spdk_ns 00:21:38.582 11:34:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:38.582 11:34:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:38.582 11:34:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:38.582 11:34:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@300 -- 
# return 0 00:21:38.582 11:34:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:21:38.582 11:34:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:21:38.582 11:34:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # echo 0 00:21:38.582 11:34:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:21:38.582 11:34:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:21:38.582 11:34:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:21:38.582 11:34:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:21:38.582 11:34:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:21:38.582 11:34:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:21:38.582 11:34:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@726 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:21:39.150 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:39.410 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:21:39.410 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:21:39.410 00:21:39.410 real 0m3.041s 00:21:39.410 user 0m1.086s 00:21:39.410 sys 0m1.341s 00:21:39.410 ************************************ 00:21:39.410 END TEST nvmf_identify_kernel_target 00:21:39.410 ************************************ 00:21:39.410 11:34:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:39.410 11:34:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:21:39.410 11:34:24 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:21:39.410 11:34:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:39.410 11:34:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:39.410 11:34:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:39.410 ************************************ 00:21:39.410 START TEST nvmf_auth_host 00:21:39.410 ************************************ 00:21:39.410 11:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:21:39.670 * Looking for test storage... 
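The nvmf_identify_kernel_target run traced above drives the Linux kernel nvmet target purely through configfs: it picks an unused namespace block device, creates the subsystem and port nodes, links them, runs discovery and identify against 10.0.0.1:4420, then tears everything back down in clean_kernel_target. Below is a minimal standalone sketch of that lifecycle. The NQN, backing device and address mirror the log, but the trace only echoes the values, not the destination files, so the configfs attribute names here are the standard nvmet ones and should be read as assumptions.

#!/usr/bin/env bash
# Minimal sketch of the kernel nvmet target lifecycle exercised by
# nvmf_identify_kernel_target (setup: nvmf/common.sh@686-705, teardown: @712-723).
set -euo pipefail

nqn=nqn.2016-06.io.spdk:testnqn
dev=/dev/nvme1n1                      # unused namespace picked by the test above
subsys=/sys/kernel/config/nvmet/subsystems/$nqn
port=/sys/kernel/config/nvmet/ports/1

modprobe nvmet-tcp                    # pulls in nvmet as a dependency

# --- setup ---
mkdir "$subsys"
mkdir "$subsys/namespaces/1"
mkdir "$port"
echo "SPDK-$nqn" > "$subsys/attr_model"            # shows up as Model Number in identify
echo 1           > "$subsys/attr_allow_any_host"
echo "$dev"      > "$subsys/namespaces/1/device_path"
echo 1           > "$subsys/namespaces/1/enable"
echo 10.0.0.1    > "$port/addr_traddr"
echo tcp         > "$port/addr_trtype"
echo 4420        > "$port/addr_trsvcid"
echo ipv4        > "$port/addr_adrfam"
ln -s "$subsys" "$port/subsystems/"

# Host-side view, as exercised in the log
nvme discover -t tcp -a 10.0.0.1 -s 4420

# --- teardown (clean_kernel_target) ---
echo 0 > "$subsys/namespaces/1/enable"
rm -f "$port/subsystems/$nqn"
rmdir "$subsys/namespaces/1" "$port" "$subsys"
modprobe -r nvmet_tcp nvmet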
00:21:39.670 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:21:39.670 11:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:39.670 11:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # lcov --version 00:21:39.670 11:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:39.670 11:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:39.670 11:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:39.670 11:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:39.670 11:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:39.670 11:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:21:39.670 11:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:21:39.670 11:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:21:39.670 11:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:21:39.670 11:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:21:39.670 11:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:21:39.670 11:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:21:39.670 11:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:39.670 11:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:21:39.670 11:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:21:39.670 11:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:39.670 11:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:39.670 11:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:21:39.670 11:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:21:39.670 11:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:39.670 11:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:21:39.670 11:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:21:39.670 11:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:21:39.670 11:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:21:39.670 11:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:39.670 11:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:21:39.670 11:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:21:39.670 11:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:39.670 11:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:39.670 11:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:21:39.670 11:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:39.670 11:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:39.670 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:39.670 --rc genhtml_branch_coverage=1 00:21:39.670 --rc genhtml_function_coverage=1 00:21:39.670 --rc genhtml_legend=1 00:21:39.670 --rc geninfo_all_blocks=1 00:21:39.670 --rc geninfo_unexecuted_blocks=1 00:21:39.670 00:21:39.670 ' 00:21:39.670 11:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:39.670 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:39.670 --rc genhtml_branch_coverage=1 00:21:39.670 --rc genhtml_function_coverage=1 00:21:39.670 --rc genhtml_legend=1 00:21:39.670 --rc geninfo_all_blocks=1 00:21:39.670 --rc geninfo_unexecuted_blocks=1 00:21:39.670 00:21:39.670 ' 00:21:39.670 11:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:39.670 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:39.670 --rc genhtml_branch_coverage=1 00:21:39.670 --rc genhtml_function_coverage=1 00:21:39.670 --rc genhtml_legend=1 00:21:39.670 --rc geninfo_all_blocks=1 00:21:39.670 --rc geninfo_unexecuted_blocks=1 00:21:39.670 00:21:39.670 ' 00:21:39.670 11:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:39.670 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:39.670 --rc genhtml_branch_coverage=1 00:21:39.670 --rc genhtml_function_coverage=1 00:21:39.670 --rc genhtml_legend=1 00:21:39.670 --rc geninfo_all_blocks=1 00:21:39.670 --rc geninfo_unexecuted_blocks=1 00:21:39.670 00:21:39.670 ' 00:21:39.670 11:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:39.670 11:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:21:39.670 11:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:39.670 11:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:39.670 11:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:39.670 11:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:39.670 11:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:39.670 11:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:39.670 11:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:39.670 11:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:39.670 11:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:39.670 11:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:39.671 11:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a 00:21:39.671 11:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=806d442c-08ba-4719-b76d-33169283459a 00:21:39.671 11:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:39.671 11:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:39.671 11:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:39.671 11:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:39.671 11:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:39.671 11:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:21:39.671 11:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:39.671 11:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:39.671 11:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:39.671 11:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:39.671 11:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:39.671 11:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:39.671 11:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:21:39.671 11:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:39.671 11:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:21:39.671 11:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:39.671 11:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:39.671 11:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:39.671 11:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:39.671 11:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:39.671 11:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:39.671 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:39.671 11:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:39.671 11:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:39.671 11:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:39.671 11:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:21:39.671 11:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:21:39.671 11:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # 
subnqn=nqn.2024-02.io.spdk:cnode0 00:21:39.671 11:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:21:39.671 11:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:21:39.671 11:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:21:39.671 11:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:21:39.671 11:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:21:39.671 11:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:21:39.671 11:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:39.671 11:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:39.671 11:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:39.671 11:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:39.671 11:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:39.671 11:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:39.671 11:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:39.671 11:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:39.671 11:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:21:39.671 11:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:21:39.671 11:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:21:39.671 11:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:21:39.671 11:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:21:39.671 11:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@460 -- # nvmf_veth_init 00:21:39.671 11:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:39.671 11:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:21:39.671 11:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:21:39.671 11:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:21:39.671 11:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:39.671 11:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:21:39.671 11:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:39.671 11:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:21:39.671 11:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:39.671 11:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:21:39.671 11:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:39.671 11:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:39.671 11:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:39.671 11:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:39.671 11:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:39.671 11:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:39.671 11:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:21:39.671 Cannot find device "nvmf_init_br" 00:21:39.671 11:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@162 -- # true 00:21:39.671 11:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:21:39.671 Cannot find device "nvmf_init_br2" 00:21:39.671 11:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@163 -- # true 00:21:39.671 11:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:21:39.671 Cannot find device "nvmf_tgt_br" 00:21:39.671 11:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@164 -- # true 00:21:39.671 11:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:21:39.671 Cannot find device "nvmf_tgt_br2" 00:21:39.671 11:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@165 -- # true 00:21:39.671 11:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:21:39.671 Cannot find device "nvmf_init_br" 00:21:39.671 11:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@166 -- # true 00:21:39.672 11:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:21:39.930 Cannot find device "nvmf_init_br2" 00:21:39.930 11:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@167 -- # true 00:21:39.930 11:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:21:39.930 Cannot find device "nvmf_tgt_br" 00:21:39.930 11:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@168 -- # true 00:21:39.930 11:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:21:39.930 Cannot find device "nvmf_tgt_br2" 00:21:39.930 11:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@169 -- # true 00:21:39.930 11:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:21:39.930 Cannot find device "nvmf_br" 00:21:39.930 11:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@170 -- # true 00:21:39.930 11:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:21:39.930 Cannot find device "nvmf_init_if" 00:21:39.930 11:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@171 -- # true 00:21:39.930 11:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:21:39.930 Cannot find device "nvmf_init_if2" 00:21:39.930 11:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@172 -- # true 00:21:39.930 11:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:39.930 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:39.930 11:34:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@173 -- # true 00:21:39.930 11:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:39.930 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:39.930 11:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@174 -- # true 00:21:39.930 11:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:21:39.930 11:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:39.930 11:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:21:39.930 11:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:39.930 11:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:39.930 11:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:39.930 11:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:39.930 11:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:39.930 11:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:21:39.930 11:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:21:39.930 11:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:21:39.930 11:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:21:39.930 11:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:21:39.930 11:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:21:39.930 11:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:21:39.930 11:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:21:39.930 11:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:21:39.930 11:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:39.930 11:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:39.931 11:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:39.931 11:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:21:39.931 11:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:21:39.931 11:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:21:40.189 11:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:21:40.189 11:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 
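nvmftestinit for the auth test rebuilds the same virtual topology every nvmf-tcp test uses: a target network namespace, veth pairs toward both the initiator and the target sides, and a bridge joining the host-side ends. A condensed sketch of the topology the ip commands above create, with the interface names and 10.0.0.x addresses as in the trace:

#!/usr/bin/env bash
# Condensed sketch of nvmf_veth_init as traced above.
set -euo pipefail

ip netns add nvmf_tgt_ns_spdk

# Initiator-side and target-side veth pairs
ip link add nvmf_init_if  type veth peer name nvmf_init_br
ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2

# Target ends live inside the namespace
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

# Addressing: initiator 10.0.0.1/.2, target 10.0.0.3/.4
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip addr add 10.0.0.2/24 dev nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2

# Bring everything up
for l in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$l" up
done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

# Bridge the host-side ends together
ip link add nvmf_br type bridge
ip link set nvmf_br up
for l in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$l" master nvmf_br
done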
00:21:40.189 11:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:40.189 11:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:40.189 11:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:21:40.189 11:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:21:40.189 11:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:21:40.189 11:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:40.189 11:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:21:40.189 11:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:21:40.189 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:21:40.189 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.090 ms 00:21:40.189 00:21:40.189 --- 10.0.0.3 ping statistics --- 00:21:40.189 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:40.189 rtt min/avg/max/mdev = 0.090/0.090/0.090/0.000 ms 00:21:40.189 11:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:21:40.189 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:21:40.189 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.051 ms 00:21:40.189 00:21:40.189 --- 10.0.0.4 ping statistics --- 00:21:40.189 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:40.189 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:21:40.189 11:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:40.190 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:40.190 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:21:40.190 00:21:40.190 --- 10.0.0.1 ping statistics --- 00:21:40.190 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:40.190 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:21:40.190 11:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:21:40.190 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:21:40.190 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.100 ms 00:21:40.190 00:21:40.190 --- 10.0.0.2 ping statistics --- 00:21:40.190 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:40.190 rtt min/avg/max/mdev = 0.100/0.100/0.100/0.000 ms 00:21:40.190 11:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:40.190 11:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@461 -- # return 0 00:21:40.190 11:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:40.190 11:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:40.190 11:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:40.190 11:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:40.190 11:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:40.190 11:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:40.190 11:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:40.190 11:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:21:40.190 11:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:40.190 11:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:40.190 11:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:40.190 11:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # nvmfpid=92027 00:21:40.190 11:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:21:40.190 11:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # waitforlisten 92027 00:21:40.190 11:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 92027 ']' 00:21:40.190 11:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:40.190 11:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:40.190 11:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
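With the topology in place, the test opens port 4420 on both initiator interfaces via iptables, verifies reachability in both directions with single pings, loads nvme-tcp, and launches nvmf_tgt inside the target namespace with the nvme_auth debug flag. A sketch of that verification-and-launch step follows; paths and flags are those seen in the trace, while the SPDK_NVMF comment tags the test appends to its iptables rules (for later cleanup) are omitted, and the pid handling is simplified.

#!/usr/bin/env bash
# Sketch of the connectivity check and target launch traced above.
set -euo pipefail
SPDK_BIN=/home/vagrant/spdk_repo/spdk/build/bin   # in-repo build path from the log

# Allow NVMe/TCP (port 4420) in on both initiator-side interfaces
iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

# Host -> target namespace and back
ping -c 1 10.0.0.3
ping -c 1 10.0.0.4
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2

modprobe nvme-tcp

# Start the SPDK target inside the namespace with DH-HMAC-CHAP debug logging
ip netns exec nvmf_tgt_ns_spdk "$SPDK_BIN/nvmf_tgt" -i 0 -e 0xFFFF -L nvme_auth &
nvmfpid=$!
echo "nvmf_tgt running as pid $nvmfpid; waiting for /var/tmp/spdk.sock"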
00:21:40.190 11:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:40.190 11:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:40.449 11:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:40.449 11:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:21:40.449 11:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:40.449 11:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:40.449 11:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:40.449 11:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:40.449 11:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:21:40.449 11:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:21:40.449 11:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:21:40.449 11:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:21:40.449 11:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:21:40.449 11:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:21:40.449 11:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:21:40.449 11:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:21:40.449 11:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=b89882740e673d4593472663f59db8d7 00:21:40.449 11:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:21:40.449 11:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.H7h 00:21:40.449 11:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key b89882740e673d4593472663f59db8d7 0 00:21:40.449 11:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 b89882740e673d4593472663f59db8d7 0 00:21:40.449 11:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:21:40.449 11:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:21:40.449 11:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=b89882740e673d4593472663f59db8d7 00:21:40.449 11:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:21:40.449 11:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:21:40.708 11:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.H7h 00:21:40.708 11:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.H7h 00:21:40.708 11:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.H7h 00:21:40.708 11:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:21:40.708 11:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:21:40.708 11:34:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:21:40.709 11:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:21:40.709 11:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:21:40.709 11:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:21:40.709 11:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:21:40.709 11:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=b36f68101ab683363987844782f655fb8fc07195b31d839e888a517d48b81661 00:21:40.709 11:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:21:40.709 11:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.RBB 00:21:40.709 11:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key b36f68101ab683363987844782f655fb8fc07195b31d839e888a517d48b81661 3 00:21:40.709 11:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 b36f68101ab683363987844782f655fb8fc07195b31d839e888a517d48b81661 3 00:21:40.709 11:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:21:40.709 11:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:21:40.709 11:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=b36f68101ab683363987844782f655fb8fc07195b31d839e888a517d48b81661 00:21:40.709 11:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:21:40.709 11:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:21:40.709 11:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.RBB 00:21:40.709 11:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.RBB 00:21:40.709 11:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.RBB 00:21:40.709 11:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:21:40.709 11:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:21:40.709 11:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:21:40.709 11:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:21:40.709 11:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:21:40.709 11:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:21:40.709 11:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:21:40.709 11:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=bc407a5873a4b2f44ae05faa31f39e8e0f297af358aac325 00:21:40.709 11:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:21:40.709 11:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.j5k 00:21:40.709 11:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key bc407a5873a4b2f44ae05faa31f39e8e0f297af358aac325 0 00:21:40.709 11:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 bc407a5873a4b2f44ae05faa31f39e8e0f297af358aac325 0 
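The repeated gen_dhchap_key calls in this part of the trace all follow the same pattern; condensed into one function it looks roughly like the sketch below. The digest-code map, the xxd read of len/2 random bytes, the spdk.key-<digest>.XXX temp-file naming and the 0600 mode are all visible above; the final DHHC-1 wrapping is performed by the inline "python -" step and is only summarised in a comment here.

    # Condensed view of gen_dhchap_key as traced above (sketch, not the
    # verbatim helper from nvmf/common.sh).
    declare -A digests=([null]=0 [sha256]=1 [sha384]=2 [sha512]=3)

    gen_dhchap_key() {
        local digest=$1 len=$2 file key
        # "len" is the length of the hex string, so read len/2 random bytes
        key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)
        file=$(mktemp -t "spdk.key-$digest.XXX")
        # format_dhchap_key (the "python -" step in the trace) wraps this into
        # "DHHC-1:0${digests[$digest]}:<encoded key>:" -- the DHHC-1:00:/01:/02:/03:
        # strings used later in the run. The raw hex is stored here instead,
        # just to show the flow.
        echo "$key" > "$file"
        chmod 0600 "$file"
        echo "$file"
    }

Called as gen_dhchap_key null 32 or gen_dhchap_key sha512 64, exactly as in the keys[] and ckeys[] assignments in the trace.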
00:21:40.709 11:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:21:40.709 11:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:21:40.709 11:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=bc407a5873a4b2f44ae05faa31f39e8e0f297af358aac325 00:21:40.709 11:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:21:40.709 11:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:21:40.709 11:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.j5k 00:21:40.709 11:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.j5k 00:21:40.709 11:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.j5k 00:21:40.709 11:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:21:40.709 11:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:21:40.709 11:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:21:40.709 11:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:21:40.709 11:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:21:40.709 11:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:21:40.709 11:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:21:40.709 11:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=79a0308067f27a46a87bad3ef27208944888597df81be93f 00:21:40.709 11:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:21:40.709 11:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.MTj 00:21:40.709 11:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 79a0308067f27a46a87bad3ef27208944888597df81be93f 2 00:21:40.709 11:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 79a0308067f27a46a87bad3ef27208944888597df81be93f 2 00:21:40.709 11:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:21:40.709 11:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:21:40.709 11:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=79a0308067f27a46a87bad3ef27208944888597df81be93f 00:21:40.709 11:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:21:40.709 11:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:21:40.709 11:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.MTj 00:21:40.709 11:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.MTj 00:21:40.709 11:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.MTj 00:21:40.709 11:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:21:40.709 11:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:21:40.709 11:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:21:40.709 11:34:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:21:40.709 11:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:21:40.709 11:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:21:40.709 11:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:21:40.709 11:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=07fb6e4eda23be3e0f0e0130d4f348b1 00:21:40.709 11:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:21:40.709 11:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.W89 00:21:40.709 11:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 07fb6e4eda23be3e0f0e0130d4f348b1 1 00:21:40.709 11:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 07fb6e4eda23be3e0f0e0130d4f348b1 1 00:21:40.709 11:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:21:40.709 11:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:21:40.709 11:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=07fb6e4eda23be3e0f0e0130d4f348b1 00:21:40.709 11:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:21:40.709 11:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:21:40.967 11:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.W89 00:21:40.967 11:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.W89 00:21:40.967 11:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.W89 00:21:40.967 11:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:21:40.967 11:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:21:40.967 11:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:21:40.967 11:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:21:40.967 11:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:21:40.967 11:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:21:40.967 11:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:21:40.967 11:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=2c945c4eca23a7215083bf224c6417ba 00:21:40.967 11:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:21:40.967 11:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.2sS 00:21:40.967 11:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 2c945c4eca23a7215083bf224c6417ba 1 00:21:40.967 11:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 2c945c4eca23a7215083bf224c6417ba 1 00:21:40.967 11:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:21:40.967 11:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:21:40.967 11:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # 
key=2c945c4eca23a7215083bf224c6417ba 00:21:40.967 11:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:21:40.968 11:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:21:40.968 11:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.2sS 00:21:40.968 11:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.2sS 00:21:40.968 11:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.2sS 00:21:40.968 11:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:21:40.968 11:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:21:40.968 11:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:21:40.968 11:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:21:40.968 11:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:21:40.968 11:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:21:40.968 11:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:21:40.968 11:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=0b50ce7ebefa3b818313e9f4c01cf89f54e827c2d1ef9aa4 00:21:40.968 11:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:21:40.968 11:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.hS4 00:21:40.968 11:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 0b50ce7ebefa3b818313e9f4c01cf89f54e827c2d1ef9aa4 2 00:21:40.968 11:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 0b50ce7ebefa3b818313e9f4c01cf89f54e827c2d1ef9aa4 2 00:21:40.968 11:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:21:40.968 11:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:21:40.968 11:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=0b50ce7ebefa3b818313e9f4c01cf89f54e827c2d1ef9aa4 00:21:40.968 11:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:21:40.968 11:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:21:40.968 11:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.hS4 00:21:40.968 11:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.hS4 00:21:40.968 11:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.hS4 00:21:40.968 11:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:21:40.968 11:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:21:40.968 11:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:21:40.968 11:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:21:40.968 11:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:21:40.968 11:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:21:40.968 11:34:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:21:40.968 11:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=b5ee73339d9940b7795d223261f3586d 00:21:40.968 11:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:21:40.968 11:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.6DQ 00:21:40.968 11:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key b5ee73339d9940b7795d223261f3586d 0 00:21:40.968 11:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 b5ee73339d9940b7795d223261f3586d 0 00:21:40.968 11:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:21:40.968 11:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:21:40.968 11:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=b5ee73339d9940b7795d223261f3586d 00:21:40.968 11:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:21:40.968 11:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:21:40.968 11:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.6DQ 00:21:40.968 11:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.6DQ 00:21:40.968 11:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.6DQ 00:21:40.968 11:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:21:40.968 11:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:21:40.968 11:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:21:40.968 11:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:21:40.968 11:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:21:40.968 11:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:21:40.968 11:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:21:40.968 11:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=4192cb651428cabbc30e253fd221b55d5894f6eb4559946d0eef84862fd095eb 00:21:40.968 11:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:21:40.968 11:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.yKX 00:21:40.968 11:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 4192cb651428cabbc30e253fd221b55d5894f6eb4559946d0eef84862fd095eb 3 00:21:40.968 11:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 4192cb651428cabbc30e253fd221b55d5894f6eb4559946d0eef84862fd095eb 3 00:21:40.968 11:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:21:40.968 11:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:21:40.968 11:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=4192cb651428cabbc30e253fd221b55d5894f6eb4559946d0eef84862fd095eb 00:21:40.968 11:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:21:40.968 11:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@733 -- # python - 00:21:41.227 11:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.yKX 00:21:41.227 11:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.yKX 00:21:41.227 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:41.227 11:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.yKX 00:21:41.227 11:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:21:41.227 11:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 92027 00:21:41.227 11:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 92027 ']' 00:21:41.227 11:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:41.227 11:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:41.227 11:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:41.227 11:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:41.227 11:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:41.486 11:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:41.486 11:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:21:41.486 11:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:21:41.486 11:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.H7h 00:21:41.486 11:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:41.486 11:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:41.486 11:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:41.486 11:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.RBB ]] 00:21:41.486 11:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.RBB 00:21:41.486 11:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:41.487 11:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:41.487 11:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:41.487 11:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:21:41.487 11:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.j5k 00:21:41.487 11:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:41.487 11:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:41.487 11:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:41.487 11:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.MTj ]] 00:21:41.487 11:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 
/tmp/spdk.key-sha384.MTj 00:21:41.487 11:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:41.487 11:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:41.487 11:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:41.487 11:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:21:41.487 11:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.W89 00:21:41.487 11:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:41.487 11:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:41.487 11:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:41.487 11:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.2sS ]] 00:21:41.487 11:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.2sS 00:21:41.487 11:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:41.487 11:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:41.487 11:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:41.487 11:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:21:41.487 11:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.hS4 00:21:41.487 11:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:41.487 11:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:41.487 11:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:41.487 11:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.6DQ ]] 00:21:41.487 11:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.6DQ 00:21:41.487 11:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:41.487 11:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:41.487 11:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:41.487 11:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:21:41.487 11:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.yKX 00:21:41.487 11:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:41.487 11:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:41.487 11:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:41.487 11:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:21:41.487 11:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:21:41.487 11:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:21:41.487 11:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:41.487 11:34:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:41.487 11:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:41.487 11:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:41.487 11:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:41.487 11:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:41.487 11:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:41.487 11:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:41.487 11:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:41.487 11:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:41.487 11:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:21:41.487 11:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:21:41.487 11:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:21:41.487 11:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:21:41.487 11:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:21:41.487 11:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:21:41.487 11:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # local block nvme 00:21:41.487 11:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:21:41.487 11:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@670 -- # modprobe nvmet 00:21:41.487 11:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:21:41.487 11:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:21:41.746 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:41.746 Waiting for block devices as requested 00:21:42.004 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:21:42.004 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:21:42.569 11:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:21:42.569 11:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:21:42.569 11:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:21:42.569 11:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:21:42.569 11:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:21:42.569 11:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:21:42.569 11:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:21:42.569 11:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:21:42.569 11:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:21:42.569 No valid GPT data, bailing 00:21:42.569 11:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:21:42.569 11:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:21:42.569 11:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:21:42.569 11:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:21:42.569 11:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:21:42.569 11:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n2 ]] 00:21:42.569 11:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n2 00:21:42.569 11:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n2 00:21:42.570 11:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:21:42.570 11:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:21:42.570 11:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n2 00:21:42.570 11:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n2 pt 00:21:42.570 11:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:21:42.570 No valid GPT data, bailing 00:21:42.570 11:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:21:42.570 11:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:21:42.570 11:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
scripts/common.sh@395 -- # return 1 00:21:42.570 11:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n2 00:21:42.570 11:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:21:42.570 11:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n3 ]] 00:21:42.570 11:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n3 00:21:42.570 11:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n3 00:21:42.570 11:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:21:42.570 11:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:21:42.570 11:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n3 00:21:42.570 11:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n3 pt 00:21:42.570 11:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:21:42.570 No valid GPT data, bailing 00:21:42.570 11:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:21:42.570 11:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:21:42.570 11:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:21:42.570 11:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n3 00:21:42.570 11:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:21:42.570 11:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme1n1 ]] 00:21:42.570 11:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme1n1 00:21:42.570 11:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:21:42.570 11:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:21:42.570 11:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:21:42.570 11:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme1n1 00:21:42.570 11:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:21:42.570 11:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:21:42.828 No valid GPT data, bailing 00:21:42.828 11:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:21:42.828 11:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:21:42.828 11:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:21:42.828 11:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme1n1 00:21:42.828 11:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -b /dev/nvme1n1 ]] 00:21:42.828 11:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:21:42.828 11:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@687 -- # mkdir 
/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:21:42.828 11:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:21:42.828 11:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:21:42.828 11:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1 00:21:42.828 11:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo /dev/nvme1n1 00:21:42.828 11:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 1 00:21:42.828 11:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:21:42.828 11:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo tcp 00:21:42.828 11:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # echo 4420 00:21:42.828 11:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # echo ipv4 00:21:42.828 11:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:21:42.828 11:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a --hostid=806d442c-08ba-4719-b76d-33169283459a -a 10.0.0.1 -t tcp -s 4420 00:21:42.828 00:21:42.828 Discovery Log Number of Records 2, Generation counter 2 00:21:42.828 =====Discovery Log Entry 0====== 00:21:42.828 trtype: tcp 00:21:42.828 adrfam: ipv4 00:21:42.828 subtype: current discovery subsystem 00:21:42.828 treq: not specified, sq flow control disable supported 00:21:42.828 portid: 1 00:21:42.828 trsvcid: 4420 00:21:42.828 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:21:42.828 traddr: 10.0.0.1 00:21:42.828 eflags: none 00:21:42.828 sectype: none 00:21:42.828 =====Discovery Log Entry 1====== 00:21:42.828 trtype: tcp 00:21:42.828 adrfam: ipv4 00:21:42.828 subtype: nvme subsystem 00:21:42.828 treq: not specified, sq flow control disable supported 00:21:42.828 portid: 1 00:21:42.828 trsvcid: 4420 00:21:42.828 subnqn: nqn.2024-02.io.spdk:cnode0 00:21:42.828 traddr: 10.0.0.1 00:21:42.828 eflags: none 00:21:42.828 sectype: none 00:21:42.828 11:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:21:42.829 11:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:21:42.829 11:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:21:42.829 11:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:21:42.829 11:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:42.829 11:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:42.829 11:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:21:42.829 11:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:21:42.829 11:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmM0MDdhNTg3M2E0YjJmNDRhZTA1ZmFhMzFmMzllOGUwZjI5N2FmMzU4YWFjMzI1e9mfRQ==: 00:21:42.829 11:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:NzlhMDMwODA2N2YyN2E0NmE4N2JhZDNlZjI3MjA4OTQ0ODg4NTk3ZGY4MWJlOTNmW0IdTg==: 00:21:42.829 11:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:42.829 11:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:21:42.829 11:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmM0MDdhNTg3M2E0YjJmNDRhZTA1ZmFhMzFmMzllOGUwZjI5N2FmMzU4YWFjMzI1e9mfRQ==: 00:21:42.829 11:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzlhMDMwODA2N2YyN2E0NmE4N2JhZDNlZjI3MjA4OTQ0ODg4NTk3ZGY4MWJlOTNmW0IdTg==: ]] 00:21:42.829 11:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzlhMDMwODA2N2YyN2E0NmE4N2JhZDNlZjI3MjA4OTQ0ODg4NTk3ZGY4MWJlOTNmW0IdTg==: 00:21:42.829 11:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:21:42.829 11:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:21:42.829 11:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:21:42.829 11:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:42.829 11:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:21:42.829 11:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:42.829 11:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:21:42.829 11:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:42.829 11:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:21:42.829 11:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:42.829 11:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:42.829 11:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:42.829 11:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:42.829 11:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:43.088 11:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:43.088 11:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:43.088 11:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:43.088 11:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:43.088 11:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:43.088 11:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:43.088 11:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:43.088 11:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:43.088 11:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:43.088 11:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 
10.0.0.1 ]] 00:21:43.088 11:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:43.088 11:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:43.088 11:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.088 11:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:43.088 nvme0n1 00:21:43.088 11:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:43.088 11:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:43.088 11:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.088 11:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:43.088 11:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:43.088 11:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:43.088 11:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:43.088 11:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:43.088 11:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.088 11:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:43.088 11:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:43.088 11:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:21:43.088 11:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:21:43.088 11:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:43.088 11:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:21:43.088 11:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:43.088 11:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:43.088 11:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:21:43.088 11:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:21:43.088 11:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Yjg5ODgyNzQwZTY3M2Q0NTkzNDcyNjYzZjU5ZGI4ZDeC/LYh: 00:21:43.089 11:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjM2ZjY4MTAxYWI2ODMzNjM5ODc4NDQ3ODJmNjU1ZmI4ZmMwNzE5NWIzMWQ4MzllODg4YTUxN2Q0OGI4MTY2MfjOZbc=: 00:21:43.089 11:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:43.089 11:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:21:43.089 11:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Yjg5ODgyNzQwZTY3M2Q0NTkzNDcyNjYzZjU5ZGI4ZDeC/LYh: 00:21:43.089 11:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjM2ZjY4MTAxYWI2ODMzNjM5ODc4NDQ3ODJmNjU1ZmI4ZmMwNzE5NWIzMWQ4MzllODg4YTUxN2Q0OGI4MTY2MfjOZbc=: ]] 00:21:43.089 11:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:YjM2ZjY4MTAxYWI2ODMzNjM5ODc4NDQ3ODJmNjU1ZmI4ZmMwNzE5NWIzMWQ4MzllODg4YTUxN2Q0OGI4MTY2MfjOZbc=: 00:21:43.089 11:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:21:43.089 11:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:43.089 11:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:21:43.089 11:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:21:43.089 11:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:21:43.089 11:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:43.089 11:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:21:43.089 11:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.089 11:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:43.089 11:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:43.089 11:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:43.089 11:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:43.089 11:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:43.089 11:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:43.089 11:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:43.089 11:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:43.089 11:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:43.089 11:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:43.089 11:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:43.089 11:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:43.089 11:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:43.089 11:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:43.089 11:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.089 11:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:43.348 nvme0n1 00:21:43.348 11:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:43.348 11:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:43.348 11:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:43.348 11:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.348 11:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:43.348 11:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:43.348 
11:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:43.348 11:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:43.348 11:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.348 11:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:43.348 11:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:43.348 11:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:43.348 11:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:21:43.348 11:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:43.348 11:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:43.348 11:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:21:43.348 11:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:21:43.348 11:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmM0MDdhNTg3M2E0YjJmNDRhZTA1ZmFhMzFmMzllOGUwZjI5N2FmMzU4YWFjMzI1e9mfRQ==: 00:21:43.348 11:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzlhMDMwODA2N2YyN2E0NmE4N2JhZDNlZjI3MjA4OTQ0ODg4NTk3ZGY4MWJlOTNmW0IdTg==: 00:21:43.348 11:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:43.348 11:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:21:43.348 11:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmM0MDdhNTg3M2E0YjJmNDRhZTA1ZmFhMzFmMzllOGUwZjI5N2FmMzU4YWFjMzI1e9mfRQ==: 00:21:43.348 11:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzlhMDMwODA2N2YyN2E0NmE4N2JhZDNlZjI3MjA4OTQ0ODg4NTk3ZGY4MWJlOTNmW0IdTg==: ]] 00:21:43.348 11:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzlhMDMwODA2N2YyN2E0NmE4N2JhZDNlZjI3MjA4OTQ0ODg4NTk3ZGY4MWJlOTNmW0IdTg==: 00:21:43.348 11:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:21:43.348 11:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:43.348 11:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:21:43.348 11:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:21:43.348 11:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:21:43.348 11:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:43.348 11:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:21:43.348 11:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.348 11:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:43.348 11:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:43.348 11:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:43.348 11:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:43.348 11:34:28 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:43.348 11:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:43.348 11:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:43.348 11:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:43.348 11:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:43.348 11:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:43.348 11:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:43.348 11:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:43.348 11:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:43.348 11:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:43.348 11:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.348 11:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:43.348 nvme0n1 00:21:43.348 11:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:43.348 11:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:43.348 11:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:43.348 11:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.348 11:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:43.348 11:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:43.607 11:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:43.607 11:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:43.607 11:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.607 11:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:43.607 11:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:43.607 11:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:43.607 11:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:21:43.607 11:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:43.607 11:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:43.607 11:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:21:43.607 11:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:21:43.607 11:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDdmYjZlNGVkYTIzYmUzZTBmMGUwMTMwZDRmMzQ4YjE3mYm7: 00:21:43.608 11:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MmM5NDVjNGVjYTIzYTcyMTUwODNiZjIyNGM2NDE3YmGHQr9X: 00:21:43.608 11:34:28 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:43.608 11:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:21:43.608 11:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDdmYjZlNGVkYTIzYmUzZTBmMGUwMTMwZDRmMzQ4YjE3mYm7: 00:21:43.608 11:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MmM5NDVjNGVjYTIzYTcyMTUwODNiZjIyNGM2NDE3YmGHQr9X: ]] 00:21:43.608 11:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MmM5NDVjNGVjYTIzYTcyMTUwODNiZjIyNGM2NDE3YmGHQr9X: 00:21:43.608 11:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:21:43.608 11:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:43.608 11:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:21:43.608 11:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:21:43.608 11:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:21:43.608 11:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:43.608 11:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:21:43.608 11:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.608 11:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:43.608 11:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:43.608 11:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:43.608 11:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:43.608 11:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:43.608 11:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:43.608 11:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:43.608 11:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:43.608 11:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:43.608 11:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:43.608 11:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:43.608 11:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:43.608 11:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:43.608 11:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:43.608 11:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.608 11:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:43.608 nvme0n1 00:21:43.608 11:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:43.608 11:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # jq -r '.[].name' 00:21:43.608 11:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:43.608 11:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.608 11:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:43.608 11:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:43.608 11:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:43.608 11:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:43.608 11:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.608 11:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:43.608 11:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:43.608 11:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:43.608 11:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:21:43.608 11:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:43.608 11:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:43.608 11:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:21:43.608 11:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:21:43.608 11:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MGI1MGNlN2ViZWZhM2I4MTgzMTNlOWY0YzAxY2Y4OWY1NGU4MjdjMmQxZWY5YWE0JcQyCw==: 00:21:43.608 11:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjVlZTczMzM5ZDk5NDBiNzc5NWQyMjMyNjFmMzU4NmR2PvMh: 00:21:43.608 11:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:43.608 11:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:21:43.608 11:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MGI1MGNlN2ViZWZhM2I4MTgzMTNlOWY0YzAxY2Y4OWY1NGU4MjdjMmQxZWY5YWE0JcQyCw==: 00:21:43.608 11:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjVlZTczMzM5ZDk5NDBiNzc5NWQyMjMyNjFmMzU4NmR2PvMh: ]] 00:21:43.608 11:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjVlZTczMzM5ZDk5NDBiNzc5NWQyMjMyNjFmMzU4NmR2PvMh: 00:21:43.608 11:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:21:43.608 11:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:43.608 11:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:21:43.608 11:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:21:43.608 11:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:21:43.608 11:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:43.608 11:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:21:43.608 11:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.608 11:34:29 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:43.608 11:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:43.608 11:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:43.608 11:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:43.608 11:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:43.608 11:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:43.608 11:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:43.608 11:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:43.608 11:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:43.608 11:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:43.608 11:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:43.608 11:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:43.608 11:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:43.608 11:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:21:43.608 11:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.608 11:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:43.865 nvme0n1 00:21:43.865 11:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:43.865 11:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:43.865 11:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:43.865 11:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.865 11:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:43.865 11:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:43.865 11:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:43.866 11:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:43.866 11:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.866 11:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:43.866 11:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:43.866 11:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:43.866 11:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:21:43.866 11:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:43.866 11:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:43.866 11:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:21:43.866 
11:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:21:43.866 11:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDE5MmNiNjUxNDI4Y2FiYmMzMGUyNTNmZDIyMWI1NWQ1ODk0ZjZlYjQ1NTk5NDZkMGVlZjg0ODYyZmQwOTVlYu6ZJZo=: 00:21:43.866 11:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:21:43.866 11:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:43.866 11:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:21:43.866 11:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDE5MmNiNjUxNDI4Y2FiYmMzMGUyNTNmZDIyMWI1NWQ1ODk0ZjZlYjQ1NTk5NDZkMGVlZjg0ODYyZmQwOTVlYu6ZJZo=: 00:21:43.866 11:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:21:43.866 11:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:21:43.866 11:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:43.866 11:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:21:43.866 11:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:21:43.866 11:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:21:43.866 11:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:43.866 11:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:21:43.866 11:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.866 11:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:43.866 11:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:43.866 11:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:43.866 11:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:43.866 11:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:43.866 11:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:43.866 11:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:43.866 11:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:43.866 11:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:43.866 11:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:43.866 11:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:43.866 11:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:43.866 11:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:43.866 11:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:21:43.866 11:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.866 11:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
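Note on the repeated get_main_ns_ip block in the trace above: the helper resolves which IP the host should dial by mapping the transport to the name of an environment variable and then dereferencing it with bash indirection (which is why the trace prints the variable name NVMF_INITIATOR_IP before echoing 10.0.0.1). The following is a minimal sketch reconstructed from the xtrace lines only; the real helper lives in nvmf/common.sh and handles more than is shown, and TEST_TRANSPORT is used here as a stand-in for however the transport value ("tcp" in this run) actually reaches the function.

get_main_ns_ip() {
	local ip
	local -A ip_candidates=()

	# Each transport maps to the NAME of the variable that holds its usable IP.
	ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
	ip_candidates["tcp"]=NVMF_INITIATOR_IP

	# Bail out if no transport is set or no candidate exists for it (assumed check).
	[[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
	ip=${ip_candidates[$TEST_TRANSPORT]}

	# ${!ip} is bash indirection: NVMF_INITIATOR_IP expands to 10.0.0.1 in this run.
	[[ -z ${!ip} ]] && return 1
	echo "${!ip}"
}
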
00:21:43.866 nvme0n1 00:21:43.866 11:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:43.866 11:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:43.866 11:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:43.866 11:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.866 11:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:44.123 11:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:44.123 11:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:44.123 11:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:44.123 11:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:44.123 11:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:44.123 11:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:44.123 11:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:21:44.124 11:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:44.124 11:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:21:44.124 11:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:44.124 11:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:44.124 11:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:21:44.124 11:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:21:44.124 11:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Yjg5ODgyNzQwZTY3M2Q0NTkzNDcyNjYzZjU5ZGI4ZDeC/LYh: 00:21:44.124 11:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjM2ZjY4MTAxYWI2ODMzNjM5ODc4NDQ3ODJmNjU1ZmI4ZmMwNzE5NWIzMWQ4MzllODg4YTUxN2Q0OGI4MTY2MfjOZbc=: 00:21:44.124 11:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:44.124 11:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:21:44.382 11:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Yjg5ODgyNzQwZTY3M2Q0NTkzNDcyNjYzZjU5ZGI4ZDeC/LYh: 00:21:44.382 11:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjM2ZjY4MTAxYWI2ODMzNjM5ODc4NDQ3ODJmNjU1ZmI4ZmMwNzE5NWIzMWQ4MzllODg4YTUxN2Q0OGI4MTY2MfjOZbc=: ]] 00:21:44.382 11:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjM2ZjY4MTAxYWI2ODMzNjM5ODc4NDQ3ODJmNjU1ZmI4ZmMwNzE5NWIzMWQ4MzllODg4YTUxN2Q0OGI4MTY2MfjOZbc=: 00:21:44.382 11:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:21:44.382 11:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:44.382 11:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:21:44.382 11:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:21:44.382 11:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:21:44.382 11:34:29 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:44.382 11:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:21:44.382 11:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:44.382 11:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:44.383 11:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:44.383 11:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:44.383 11:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:44.383 11:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:44.383 11:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:44.383 11:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:44.383 11:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:44.383 11:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:44.383 11:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:44.383 11:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:44.383 11:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:44.383 11:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:44.383 11:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:44.383 11:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:44.383 11:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:44.692 nvme0n1 00:21:44.692 11:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:44.692 11:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:44.692 11:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:44.692 11:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:44.692 11:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:44.692 11:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:44.692 11:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:44.692 11:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:44.692 11:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:44.692 11:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:44.692 11:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:44.692 11:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:44.692 11:34:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:21:44.692 11:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:44.692 11:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:44.692 11:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:21:44.692 11:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:21:44.692 11:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmM0MDdhNTg3M2E0YjJmNDRhZTA1ZmFhMzFmMzllOGUwZjI5N2FmMzU4YWFjMzI1e9mfRQ==: 00:21:44.692 11:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzlhMDMwODA2N2YyN2E0NmE4N2JhZDNlZjI3MjA4OTQ0ODg4NTk3ZGY4MWJlOTNmW0IdTg==: 00:21:44.692 11:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:44.692 11:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:21:44.692 11:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmM0MDdhNTg3M2E0YjJmNDRhZTA1ZmFhMzFmMzllOGUwZjI5N2FmMzU4YWFjMzI1e9mfRQ==: 00:21:44.692 11:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzlhMDMwODA2N2YyN2E0NmE4N2JhZDNlZjI3MjA4OTQ0ODg4NTk3ZGY4MWJlOTNmW0IdTg==: ]] 00:21:44.692 11:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzlhMDMwODA2N2YyN2E0NmE4N2JhZDNlZjI3MjA4OTQ0ODg4NTk3ZGY4MWJlOTNmW0IdTg==: 00:21:44.692 11:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:21:44.692 11:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:44.692 11:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:21:44.692 11:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:21:44.692 11:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:21:44.692 11:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:44.692 11:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:21:44.692 11:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:44.692 11:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:44.692 11:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:44.692 11:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:44.692 11:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:44.692 11:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:44.692 11:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:44.692 11:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:44.692 11:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:44.692 11:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:44.692 11:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:44.692 11:34:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:44.692 11:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:44.692 11:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:44.692 11:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:44.692 11:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:44.692 11:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:44.692 nvme0n1 00:21:44.692 11:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:44.692 11:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:44.692 11:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:44.692 11:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:44.692 11:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:44.692 11:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:44.692 11:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:44.692 11:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:44.692 11:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:44.692 11:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:44.966 11:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:44.966 11:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:44.966 11:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:21:44.966 11:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:44.966 11:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:44.966 11:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:21:44.966 11:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:21:44.966 11:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDdmYjZlNGVkYTIzYmUzZTBmMGUwMTMwZDRmMzQ4YjE3mYm7: 00:21:44.966 11:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MmM5NDVjNGVjYTIzYTcyMTUwODNiZjIyNGM2NDE3YmGHQr9X: 00:21:44.966 11:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:44.966 11:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:21:44.966 11:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDdmYjZlNGVkYTIzYmUzZTBmMGUwMTMwZDRmMzQ4YjE3mYm7: 00:21:44.966 11:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MmM5NDVjNGVjYTIzYTcyMTUwODNiZjIyNGM2NDE3YmGHQr9X: ]] 00:21:44.966 11:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MmM5NDVjNGVjYTIzYTcyMTUwODNiZjIyNGM2NDE3YmGHQr9X: 00:21:44.966 11:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:21:44.966 11:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:44.966 11:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:21:44.966 11:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:21:44.966 11:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:21:44.966 11:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:44.966 11:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:21:44.966 11:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:44.966 11:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:44.966 11:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:44.966 11:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:44.966 11:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:44.966 11:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:44.966 11:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:44.966 11:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:44.966 11:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:44.966 11:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:44.966 11:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:44.966 11:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:44.966 11:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:44.966 11:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:44.966 11:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:44.966 11:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:44.966 11:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:44.966 nvme0n1 00:21:44.966 11:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:44.966 11:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:44.966 11:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:44.966 11:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:44.966 11:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:44.966 11:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:44.966 11:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:44.966 11:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:21:44.966 11:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:44.966 11:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:44.966 11:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:44.966 11:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:44.966 11:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:21:44.966 11:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:44.966 11:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:44.966 11:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:21:44.966 11:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:21:44.966 11:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MGI1MGNlN2ViZWZhM2I4MTgzMTNlOWY0YzAxY2Y4OWY1NGU4MjdjMmQxZWY5YWE0JcQyCw==: 00:21:44.966 11:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjVlZTczMzM5ZDk5NDBiNzc5NWQyMjMyNjFmMzU4NmR2PvMh: 00:21:44.966 11:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:44.966 11:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:21:44.966 11:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MGI1MGNlN2ViZWZhM2I4MTgzMTNlOWY0YzAxY2Y4OWY1NGU4MjdjMmQxZWY5YWE0JcQyCw==: 00:21:44.966 11:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjVlZTczMzM5ZDk5NDBiNzc5NWQyMjMyNjFmMzU4NmR2PvMh: ]] 00:21:44.966 11:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjVlZTczMzM5ZDk5NDBiNzc5NWQyMjMyNjFmMzU4NmR2PvMh: 00:21:44.966 11:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:21:44.966 11:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:44.966 11:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:21:44.966 11:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:21:44.966 11:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:21:44.966 11:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:44.966 11:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:21:44.966 11:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:44.966 11:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:44.966 11:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:44.966 11:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:44.966 11:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:44.966 11:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:44.966 11:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:44.966 11:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:44.966 11:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:44.966 11:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:44.966 11:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:44.966 11:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:44.966 11:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:44.966 11:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:44.966 11:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:21:44.966 11:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:44.966 11:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:45.225 nvme0n1 00:21:45.225 11:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:45.225 11:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:45.225 11:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:45.225 11:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:45.225 11:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:45.225 11:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:45.225 11:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:45.225 11:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:45.225 11:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:45.225 11:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:45.225 11:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:45.225 11:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:45.225 11:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:21:45.225 11:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:45.225 11:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:45.225 11:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:21:45.225 11:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:21:45.225 11:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDE5MmNiNjUxNDI4Y2FiYmMzMGUyNTNmZDIyMWI1NWQ1ODk0ZjZlYjQ1NTk5NDZkMGVlZjg0ODYyZmQwOTVlYu6ZJZo=: 00:21:45.225 11:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:21:45.225 11:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:45.225 11:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:21:45.225 11:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:NDE5MmNiNjUxNDI4Y2FiYmMzMGUyNTNmZDIyMWI1NWQ1ODk0ZjZlYjQ1NTk5NDZkMGVlZjg0ODYyZmQwOTVlYu6ZJZo=: 00:21:45.225 11:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:21:45.225 11:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:21:45.225 11:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:45.225 11:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:21:45.225 11:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:21:45.225 11:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:21:45.225 11:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:45.225 11:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:21:45.225 11:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:45.225 11:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:45.225 11:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:45.225 11:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:45.225 11:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:45.225 11:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:45.225 11:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:45.226 11:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:45.226 11:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:45.226 11:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:45.226 11:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:45.226 11:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:45.226 11:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:45.226 11:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:45.226 11:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:21:45.226 11:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:45.226 11:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:45.484 nvme0n1 00:21:45.484 11:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:45.484 11:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:45.484 11:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:45.484 11:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:45.484 11:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:45.485 11:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:45.485 11:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:45.485 11:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:45.485 11:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:45.485 11:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:45.485 11:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:45.485 11:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:21:45.485 11:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:45.485 11:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:21:45.485 11:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:45.485 11:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:45.485 11:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:21:45.485 11:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:21:45.485 11:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Yjg5ODgyNzQwZTY3M2Q0NTkzNDcyNjYzZjU5ZGI4ZDeC/LYh: 00:21:45.485 11:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjM2ZjY4MTAxYWI2ODMzNjM5ODc4NDQ3ODJmNjU1ZmI4ZmMwNzE5NWIzMWQ4MzllODg4YTUxN2Q0OGI4MTY2MfjOZbc=: 00:21:45.485 11:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:45.485 11:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:21:46.052 11:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Yjg5ODgyNzQwZTY3M2Q0NTkzNDcyNjYzZjU5ZGI4ZDeC/LYh: 00:21:46.052 11:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjM2ZjY4MTAxYWI2ODMzNjM5ODc4NDQ3ODJmNjU1ZmI4ZmMwNzE5NWIzMWQ4MzllODg4YTUxN2Q0OGI4MTY2MfjOZbc=: ]] 00:21:46.052 11:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjM2ZjY4MTAxYWI2ODMzNjM5ODc4NDQ3ODJmNjU1ZmI4ZmMwNzE5NWIzMWQ4MzllODg4YTUxN2Q0OGI4MTY2MfjOZbc=: 00:21:46.052 11:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:21:46.052 11:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:46.052 11:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:21:46.052 11:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:21:46.052 11:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:21:46.052 11:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:46.052 11:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:21:46.052 11:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.052 11:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:46.052 11:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:46.052 11:34:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:46.052 11:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:46.052 11:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:46.052 11:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:46.052 11:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:46.052 11:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:46.052 11:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:46.052 11:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:46.052 11:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:46.052 11:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:46.052 11:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:46.052 11:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:46.052 11:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.052 11:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:46.310 nvme0n1 00:21:46.310 11:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:46.310 11:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:46.311 11:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:46.311 11:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.311 11:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:46.311 11:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:46.311 11:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:46.311 11:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:46.311 11:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.311 11:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:46.311 11:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:46.311 11:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:46.311 11:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:21:46.311 11:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:46.311 11:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:46.311 11:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:21:46.311 11:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:21:46.311 11:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:YmM0MDdhNTg3M2E0YjJmNDRhZTA1ZmFhMzFmMzllOGUwZjI5N2FmMzU4YWFjMzI1e9mfRQ==: 00:21:46.311 11:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzlhMDMwODA2N2YyN2E0NmE4N2JhZDNlZjI3MjA4OTQ0ODg4NTk3ZGY4MWJlOTNmW0IdTg==: 00:21:46.311 11:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:46.311 11:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:21:46.311 11:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmM0MDdhNTg3M2E0YjJmNDRhZTA1ZmFhMzFmMzllOGUwZjI5N2FmMzU4YWFjMzI1e9mfRQ==: 00:21:46.311 11:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzlhMDMwODA2N2YyN2E0NmE4N2JhZDNlZjI3MjA4OTQ0ODg4NTk3ZGY4MWJlOTNmW0IdTg==: ]] 00:21:46.311 11:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzlhMDMwODA2N2YyN2E0NmE4N2JhZDNlZjI3MjA4OTQ0ODg4NTk3ZGY4MWJlOTNmW0IdTg==: 00:21:46.311 11:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:21:46.311 11:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:46.311 11:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:21:46.311 11:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:21:46.311 11:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:21:46.311 11:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:46.311 11:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:21:46.311 11:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.311 11:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:46.570 11:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:46.570 11:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:46.570 11:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:46.570 11:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:46.570 11:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:46.570 11:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:46.570 11:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:46.570 11:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:46.570 11:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:46.570 11:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:46.570 11:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:46.570 11:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:46.570 11:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:46.570 11:34:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.570 11:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:46.570 nvme0n1 00:21:46.570 11:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:46.570 11:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:46.570 11:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:46.570 11:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.570 11:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:46.570 11:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:46.830 11:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:46.830 11:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:46.830 11:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.830 11:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:46.830 11:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:46.830 11:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:46.830 11:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:21:46.830 11:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:46.830 11:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:46.830 11:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:21:46.830 11:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:21:46.830 11:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDdmYjZlNGVkYTIzYmUzZTBmMGUwMTMwZDRmMzQ4YjE3mYm7: 00:21:46.830 11:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MmM5NDVjNGVjYTIzYTcyMTUwODNiZjIyNGM2NDE3YmGHQr9X: 00:21:46.830 11:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:46.830 11:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:21:46.830 11:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDdmYjZlNGVkYTIzYmUzZTBmMGUwMTMwZDRmMzQ4YjE3mYm7: 00:21:46.830 11:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MmM5NDVjNGVjYTIzYTcyMTUwODNiZjIyNGM2NDE3YmGHQr9X: ]] 00:21:46.830 11:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MmM5NDVjNGVjYTIzYTcyMTUwODNiZjIyNGM2NDE3YmGHQr9X: 00:21:46.830 11:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:21:46.830 11:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:46.830 11:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:21:46.830 11:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:21:46.830 11:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:21:46.830 11:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:46.830 11:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:21:46.830 11:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.830 11:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:46.830 11:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:46.830 11:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:46.830 11:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:46.830 11:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:46.831 11:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:46.831 11:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:46.831 11:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:46.831 11:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:46.831 11:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:46.831 11:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:46.831 11:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:46.831 11:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:46.831 11:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:46.831 11:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.831 11:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:47.089 nvme0n1 00:21:47.089 11:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:47.089 11:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:47.089 11:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:47.089 11:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:47.089 11:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:47.089 11:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:47.089 11:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:47.089 11:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:47.089 11:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:47.089 11:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:47.089 11:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:47.089 11:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:47.089 11:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe4096 3 00:21:47.089 11:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:47.089 11:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:47.089 11:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:21:47.089 11:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:21:47.089 11:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MGI1MGNlN2ViZWZhM2I4MTgzMTNlOWY0YzAxY2Y4OWY1NGU4MjdjMmQxZWY5YWE0JcQyCw==: 00:21:47.089 11:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjVlZTczMzM5ZDk5NDBiNzc5NWQyMjMyNjFmMzU4NmR2PvMh: 00:21:47.089 11:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:47.089 11:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:21:47.089 11:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MGI1MGNlN2ViZWZhM2I4MTgzMTNlOWY0YzAxY2Y4OWY1NGU4MjdjMmQxZWY5YWE0JcQyCw==: 00:21:47.089 11:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjVlZTczMzM5ZDk5NDBiNzc5NWQyMjMyNjFmMzU4NmR2PvMh: ]] 00:21:47.089 11:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjVlZTczMzM5ZDk5NDBiNzc5NWQyMjMyNjFmMzU4NmR2PvMh: 00:21:47.089 11:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:21:47.089 11:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:47.089 11:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:21:47.089 11:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:21:47.089 11:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:21:47.089 11:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:47.089 11:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:21:47.089 11:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:47.089 11:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:47.089 11:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:47.089 11:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:47.089 11:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:47.089 11:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:47.089 11:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:47.089 11:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:47.089 11:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:47.089 11:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:47.089 11:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:47.089 11:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:47.089 11:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:47.089 11:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:47.089 11:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:21:47.089 11:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:47.089 11:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:47.348 nvme0n1 00:21:47.348 11:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:47.348 11:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:47.348 11:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:47.348 11:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:47.348 11:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:47.348 11:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:47.348 11:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:47.348 11:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:47.348 11:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:47.348 11:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:47.348 11:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:47.348 11:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:47.348 11:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:21:47.348 11:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:47.348 11:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:47.348 11:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:21:47.348 11:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:21:47.348 11:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDE5MmNiNjUxNDI4Y2FiYmMzMGUyNTNmZDIyMWI1NWQ1ODk0ZjZlYjQ1NTk5NDZkMGVlZjg0ODYyZmQwOTVlYu6ZJZo=: 00:21:47.348 11:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:21:47.348 11:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:47.348 11:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:21:47.348 11:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDE5MmNiNjUxNDI4Y2FiYmMzMGUyNTNmZDIyMWI1NWQ1ODk0ZjZlYjQ1NTk5NDZkMGVlZjg0ODYyZmQwOTVlYu6ZJZo=: 00:21:47.348 11:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:21:47.348 11:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:21:47.348 11:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:47.348 11:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:21:47.348 11:34:32 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:21:47.348 11:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:21:47.348 11:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:47.348 11:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:21:47.348 11:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:47.348 11:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:47.348 11:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:47.348 11:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:47.348 11:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:47.348 11:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:47.348 11:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:47.348 11:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:47.348 11:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:47.348 11:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:47.348 11:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:47.348 11:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:47.348 11:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:47.348 11:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:47.348 11:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:21:47.348 11:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:47.348 11:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:47.607 nvme0n1 00:21:47.607 11:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:47.607 11:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:47.607 11:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:47.608 11:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:47.608 11:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:47.608 11:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:47.608 11:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:47.608 11:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:47.608 11:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:47.608 11:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:47.608 11:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:47.608 11:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:21:47.608 11:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:47.608 11:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:21:47.608 11:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:47.608 11:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:47.608 11:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:21:47.608 11:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:21:47.608 11:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Yjg5ODgyNzQwZTY3M2Q0NTkzNDcyNjYzZjU5ZGI4ZDeC/LYh: 00:21:47.608 11:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjM2ZjY4MTAxYWI2ODMzNjM5ODc4NDQ3ODJmNjU1ZmI4ZmMwNzE5NWIzMWQ4MzllODg4YTUxN2Q0OGI4MTY2MfjOZbc=: 00:21:47.608 11:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:47.608 11:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:21:49.509 11:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Yjg5ODgyNzQwZTY3M2Q0NTkzNDcyNjYzZjU5ZGI4ZDeC/LYh: 00:21:49.509 11:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjM2ZjY4MTAxYWI2ODMzNjM5ODc4NDQ3ODJmNjU1ZmI4ZmMwNzE5NWIzMWQ4MzllODg4YTUxN2Q0OGI4MTY2MfjOZbc=: ]] 00:21:49.509 11:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjM2ZjY4MTAxYWI2ODMzNjM5ODc4NDQ3ODJmNjU1ZmI4ZmMwNzE5NWIzMWQ4MzllODg4YTUxN2Q0OGI4MTY2MfjOZbc=: 00:21:49.509 11:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:21:49.509 11:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:49.510 11:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:21:49.510 11:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:21:49.510 11:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:21:49.510 11:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:49.510 11:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:21:49.510 11:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:49.510 11:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:49.510 11:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:49.510 11:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:49.510 11:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:49.510 11:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:49.510 11:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:49.510 11:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:49.510 11:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:49.510 11:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:49.510 11:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:49.510 11:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:49.510 11:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:49.510 11:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:49.510 11:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:49.510 11:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:49.510 11:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:49.768 nvme0n1 00:21:49.768 11:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:49.768 11:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:49.768 11:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:49.768 11:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:49.768 11:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:49.768 11:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:49.768 11:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:49.768 11:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:49.768 11:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:49.768 11:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:49.768 11:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:49.768 11:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:49.768 11:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:21:49.768 11:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:49.768 11:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:49.768 11:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:21:49.768 11:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:21:49.768 11:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmM0MDdhNTg3M2E0YjJmNDRhZTA1ZmFhMzFmMzllOGUwZjI5N2FmMzU4YWFjMzI1e9mfRQ==: 00:21:49.768 11:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzlhMDMwODA2N2YyN2E0NmE4N2JhZDNlZjI3MjA4OTQ0ODg4NTk3ZGY4MWJlOTNmW0IdTg==: 00:21:49.768 11:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:49.768 11:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:21:49.768 11:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:YmM0MDdhNTg3M2E0YjJmNDRhZTA1ZmFhMzFmMzllOGUwZjI5N2FmMzU4YWFjMzI1e9mfRQ==: 00:21:49.768 11:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzlhMDMwODA2N2YyN2E0NmE4N2JhZDNlZjI3MjA4OTQ0ODg4NTk3ZGY4MWJlOTNmW0IdTg==: ]] 00:21:49.768 11:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzlhMDMwODA2N2YyN2E0NmE4N2JhZDNlZjI3MjA4OTQ0ODg4NTk3ZGY4MWJlOTNmW0IdTg==: 00:21:49.768 11:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:21:49.768 11:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:49.768 11:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:21:49.768 11:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:21:49.768 11:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:21:49.768 11:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:49.768 11:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:21:49.768 11:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:49.769 11:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:49.769 11:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:49.769 11:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:49.769 11:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:49.769 11:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:49.769 11:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:49.769 11:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:49.769 11:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:50.027 11:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:50.027 11:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:50.027 11:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:50.027 11:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:50.027 11:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:50.027 11:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:50.027 11:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:50.027 11:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:50.319 nvme0n1 00:21:50.319 11:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:50.319 11:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:50.319 11:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:50.319 11:34:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:50.319 11:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:50.319 11:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:50.319 11:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:50.319 11:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:50.319 11:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:50.319 11:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:50.319 11:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:50.319 11:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:50.319 11:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:21:50.319 11:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:50.319 11:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:50.319 11:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:21:50.319 11:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:21:50.319 11:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDdmYjZlNGVkYTIzYmUzZTBmMGUwMTMwZDRmMzQ4YjE3mYm7: 00:21:50.319 11:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MmM5NDVjNGVjYTIzYTcyMTUwODNiZjIyNGM2NDE3YmGHQr9X: 00:21:50.319 11:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:50.319 11:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:21:50.319 11:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDdmYjZlNGVkYTIzYmUzZTBmMGUwMTMwZDRmMzQ4YjE3mYm7: 00:21:50.319 11:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MmM5NDVjNGVjYTIzYTcyMTUwODNiZjIyNGM2NDE3YmGHQr9X: ]] 00:21:50.319 11:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MmM5NDVjNGVjYTIzYTcyMTUwODNiZjIyNGM2NDE3YmGHQr9X: 00:21:50.319 11:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:21:50.319 11:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:50.319 11:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:21:50.319 11:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:21:50.319 11:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:21:50.319 11:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:50.319 11:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:21:50.319 11:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:50.319 11:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:50.319 11:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:50.319 11:34:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:50.319 11:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:50.319 11:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:50.319 11:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:50.319 11:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:50.319 11:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:50.319 11:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:50.319 11:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:50.319 11:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:50.319 11:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:50.319 11:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:50.320 11:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:50.320 11:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:50.320 11:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:50.886 nvme0n1 00:21:50.887 11:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:50.887 11:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:50.887 11:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:50.887 11:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:50.887 11:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:50.887 11:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:50.887 11:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:50.887 11:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:50.887 11:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:50.887 11:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:50.887 11:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:50.887 11:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:50.887 11:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:21:50.887 11:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:50.887 11:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:50.887 11:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:21:50.887 11:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:21:50.887 11:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:MGI1MGNlN2ViZWZhM2I4MTgzMTNlOWY0YzAxY2Y4OWY1NGU4MjdjMmQxZWY5YWE0JcQyCw==: 00:21:50.887 11:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjVlZTczMzM5ZDk5NDBiNzc5NWQyMjMyNjFmMzU4NmR2PvMh: 00:21:50.887 11:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:50.887 11:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:21:50.887 11:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MGI1MGNlN2ViZWZhM2I4MTgzMTNlOWY0YzAxY2Y4OWY1NGU4MjdjMmQxZWY5YWE0JcQyCw==: 00:21:50.887 11:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjVlZTczMzM5ZDk5NDBiNzc5NWQyMjMyNjFmMzU4NmR2PvMh: ]] 00:21:50.887 11:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjVlZTczMzM5ZDk5NDBiNzc5NWQyMjMyNjFmMzU4NmR2PvMh: 00:21:50.887 11:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:21:50.887 11:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:50.887 11:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:21:50.887 11:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:21:50.887 11:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:21:50.887 11:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:50.887 11:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:21:50.887 11:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:50.887 11:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:50.887 11:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:50.887 11:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:50.887 11:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:50.887 11:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:50.887 11:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:50.887 11:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:50.887 11:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:50.887 11:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:50.887 11:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:50.887 11:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:50.887 11:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:50.887 11:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:50.887 11:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:21:50.887 11:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:50.887 
11:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:51.146 nvme0n1 00:21:51.146 11:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:51.146 11:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:51.146 11:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:51.146 11:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:51.146 11:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:51.146 11:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:51.146 11:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:51.146 11:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:51.146 11:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:51.146 11:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:51.146 11:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:51.146 11:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:51.146 11:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:21:51.146 11:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:51.146 11:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:51.146 11:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:21:51.146 11:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:21:51.146 11:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDE5MmNiNjUxNDI4Y2FiYmMzMGUyNTNmZDIyMWI1NWQ1ODk0ZjZlYjQ1NTk5NDZkMGVlZjg0ODYyZmQwOTVlYu6ZJZo=: 00:21:51.146 11:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:21:51.146 11:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:51.146 11:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:21:51.146 11:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDE5MmNiNjUxNDI4Y2FiYmMzMGUyNTNmZDIyMWI1NWQ1ODk0ZjZlYjQ1NTk5NDZkMGVlZjg0ODYyZmQwOTVlYu6ZJZo=: 00:21:51.146 11:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:21:51.146 11:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:21:51.146 11:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:51.146 11:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:21:51.146 11:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:21:51.146 11:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:21:51.146 11:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:51.146 11:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:21:51.146 11:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:21:51.146 11:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:51.405 11:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:51.405 11:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:51.405 11:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:51.405 11:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:51.405 11:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:51.405 11:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:51.405 11:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:51.405 11:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:51.405 11:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:51.405 11:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:51.405 11:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:51.405 11:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:51.405 11:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:21:51.405 11:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:51.405 11:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:51.664 nvme0n1 00:21:51.664 11:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:51.664 11:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:51.664 11:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:51.664 11:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:51.664 11:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:51.664 11:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:51.664 11:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:51.664 11:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:51.664 11:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:51.664 11:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:51.664 11:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:51.665 11:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:21:51.665 11:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:51.665 11:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:21:51.665 11:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:51.665 11:34:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:51.665 11:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:21:51.665 11:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:21:51.665 11:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Yjg5ODgyNzQwZTY3M2Q0NTkzNDcyNjYzZjU5ZGI4ZDeC/LYh: 00:21:51.665 11:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjM2ZjY4MTAxYWI2ODMzNjM5ODc4NDQ3ODJmNjU1ZmI4ZmMwNzE5NWIzMWQ4MzllODg4YTUxN2Q0OGI4MTY2MfjOZbc=: 00:21:51.665 11:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:51.665 11:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:21:51.665 11:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Yjg5ODgyNzQwZTY3M2Q0NTkzNDcyNjYzZjU5ZGI4ZDeC/LYh: 00:21:51.665 11:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjM2ZjY4MTAxYWI2ODMzNjM5ODc4NDQ3ODJmNjU1ZmI4ZmMwNzE5NWIzMWQ4MzllODg4YTUxN2Q0OGI4MTY2MfjOZbc=: ]] 00:21:51.665 11:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjM2ZjY4MTAxYWI2ODMzNjM5ODc4NDQ3ODJmNjU1ZmI4ZmMwNzE5NWIzMWQ4MzllODg4YTUxN2Q0OGI4MTY2MfjOZbc=: 00:21:51.665 11:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:21:51.665 11:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:51.665 11:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:21:51.665 11:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:21:51.665 11:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:21:51.665 11:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:51.665 11:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:21:51.665 11:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:51.665 11:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:51.665 11:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:51.665 11:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:51.665 11:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:51.665 11:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:51.665 11:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:51.665 11:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:51.665 11:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:51.665 11:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:51.665 11:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:51.665 11:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:51.665 11:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:51.665 11:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:51.665 11:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:51.665 11:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:51.665 11:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:52.600 nvme0n1 00:21:52.600 11:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:52.600 11:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:52.600 11:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:52.600 11:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:52.600 11:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:52.600 11:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:52.600 11:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:52.600 11:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:52.600 11:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:52.600 11:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:52.600 11:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:52.600 11:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:52.600 11:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:21:52.600 11:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:52.600 11:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:52.600 11:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:21:52.600 11:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:21:52.600 11:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmM0MDdhNTg3M2E0YjJmNDRhZTA1ZmFhMzFmMzllOGUwZjI5N2FmMzU4YWFjMzI1e9mfRQ==: 00:21:52.600 11:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzlhMDMwODA2N2YyN2E0NmE4N2JhZDNlZjI3MjA4OTQ0ODg4NTk3ZGY4MWJlOTNmW0IdTg==: 00:21:52.600 11:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:52.600 11:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:21:52.600 11:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmM0MDdhNTg3M2E0YjJmNDRhZTA1ZmFhMzFmMzllOGUwZjI5N2FmMzU4YWFjMzI1e9mfRQ==: 00:21:52.600 11:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzlhMDMwODA2N2YyN2E0NmE4N2JhZDNlZjI3MjA4OTQ0ODg4NTk3ZGY4MWJlOTNmW0IdTg==: ]] 00:21:52.600 11:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzlhMDMwODA2N2YyN2E0NmE4N2JhZDNlZjI3MjA4OTQ0ODg4NTk3ZGY4MWJlOTNmW0IdTg==: 00:21:52.600 11:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:21:52.600 11:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:52.600 11:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:21:52.600 11:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:21:52.600 11:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:21:52.600 11:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:52.600 11:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:21:52.600 11:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:52.600 11:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:52.600 11:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:52.600 11:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:52.600 11:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:52.600 11:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:52.600 11:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:52.600 11:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:52.600 11:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:52.600 11:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:52.600 11:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:52.600 11:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:52.600 11:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:52.600 11:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:52.600 11:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:52.600 11:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:52.600 11:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:53.166 nvme0n1 00:21:53.166 11:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:53.167 11:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:53.167 11:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:53.167 11:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:53.167 11:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:53.167 11:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:53.167 11:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:53.167 11:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:53.167 11:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:21:53.167 11:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:53.167 11:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:53.167 11:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:53.167 11:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:21:53.167 11:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:53.167 11:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:53.167 11:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:21:53.167 11:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:21:53.167 11:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDdmYjZlNGVkYTIzYmUzZTBmMGUwMTMwZDRmMzQ4YjE3mYm7: 00:21:53.167 11:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MmM5NDVjNGVjYTIzYTcyMTUwODNiZjIyNGM2NDE3YmGHQr9X: 00:21:53.167 11:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:53.167 11:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:21:53.167 11:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDdmYjZlNGVkYTIzYmUzZTBmMGUwMTMwZDRmMzQ4YjE3mYm7: 00:21:53.167 11:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MmM5NDVjNGVjYTIzYTcyMTUwODNiZjIyNGM2NDE3YmGHQr9X: ]] 00:21:53.167 11:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MmM5NDVjNGVjYTIzYTcyMTUwODNiZjIyNGM2NDE3YmGHQr9X: 00:21:53.167 11:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:21:53.167 11:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:53.167 11:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:21:53.167 11:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:21:53.167 11:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:21:53.167 11:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:53.167 11:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:21:53.167 11:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:53.167 11:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:53.167 11:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:53.167 11:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:53.167 11:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:53.167 11:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:53.167 11:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:53.167 11:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:53.167 11:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:53.167 
11:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:53.167 11:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:53.167 11:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:53.167 11:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:53.167 11:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:53.167 11:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:53.167 11:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:53.167 11:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:54.100 nvme0n1 00:21:54.100 11:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:54.101 11:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:54.101 11:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:54.101 11:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:54.101 11:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:54.101 11:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:54.101 11:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:54.101 11:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:54.101 11:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:54.101 11:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:54.101 11:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:54.101 11:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:54.101 11:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:21:54.101 11:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:54.101 11:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:54.101 11:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:21:54.101 11:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:21:54.101 11:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MGI1MGNlN2ViZWZhM2I4MTgzMTNlOWY0YzAxY2Y4OWY1NGU4MjdjMmQxZWY5YWE0JcQyCw==: 00:21:54.101 11:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjVlZTczMzM5ZDk5NDBiNzc5NWQyMjMyNjFmMzU4NmR2PvMh: 00:21:54.101 11:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:54.101 11:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:21:54.101 11:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MGI1MGNlN2ViZWZhM2I4MTgzMTNlOWY0YzAxY2Y4OWY1NGU4MjdjMmQxZWY5YWE0JcQyCw==: 00:21:54.101 11:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:YjVlZTczMzM5ZDk5NDBiNzc5NWQyMjMyNjFmMzU4NmR2PvMh: ]] 00:21:54.101 11:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjVlZTczMzM5ZDk5NDBiNzc5NWQyMjMyNjFmMzU4NmR2PvMh: 00:21:54.101 11:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:21:54.101 11:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:54.101 11:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:21:54.101 11:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:21:54.101 11:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:21:54.101 11:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:54.101 11:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:21:54.101 11:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:54.101 11:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:54.101 11:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:54.101 11:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:54.101 11:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:54.101 11:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:54.101 11:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:54.101 11:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:54.101 11:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:54.101 11:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:54.101 11:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:54.101 11:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:54.101 11:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:54.101 11:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:54.101 11:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:21:54.101 11:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:54.101 11:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:55.036 nvme0n1 00:21:55.036 11:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:55.036 11:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:55.036 11:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:55.036 11:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:55.036 11:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:55.036 11:34:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:55.036 11:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:55.036 11:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:55.036 11:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:55.036 11:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:55.036 11:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:55.036 11:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:55.036 11:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:21:55.036 11:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:55.036 11:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:55.036 11:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:21:55.036 11:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:21:55.036 11:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDE5MmNiNjUxNDI4Y2FiYmMzMGUyNTNmZDIyMWI1NWQ1ODk0ZjZlYjQ1NTk5NDZkMGVlZjg0ODYyZmQwOTVlYu6ZJZo=: 00:21:55.036 11:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:21:55.036 11:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:55.036 11:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:21:55.036 11:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDE5MmNiNjUxNDI4Y2FiYmMzMGUyNTNmZDIyMWI1NWQ1ODk0ZjZlYjQ1NTk5NDZkMGVlZjg0ODYyZmQwOTVlYu6ZJZo=: 00:21:55.036 11:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:21:55.036 11:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:21:55.036 11:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:55.036 11:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:21:55.036 11:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:21:55.036 11:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:21:55.036 11:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:55.036 11:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:21:55.036 11:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:55.036 11:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:55.036 11:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:55.036 11:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:55.036 11:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:55.036 11:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:55.036 11:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:55.036 11:34:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:55.036 11:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:55.036 11:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:55.036 11:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:55.036 11:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:55.036 11:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:55.036 11:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:55.036 11:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:21:55.036 11:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:55.036 11:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:55.601 nvme0n1 00:21:55.601 11:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:55.602 11:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:55.602 11:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:55.602 11:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:55.602 11:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:55.602 11:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:55.602 11:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:55.602 11:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:55.602 11:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:55.602 11:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:55.602 11:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:55.602 11:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:21:55.602 11:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:21:55.602 11:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:55.602 11:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:21:55.602 11:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:55.602 11:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:55.602 11:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:21:55.602 11:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:21:55.602 11:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Yjg5ODgyNzQwZTY3M2Q0NTkzNDcyNjYzZjU5ZGI4ZDeC/LYh: 00:21:55.602 11:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:YjM2ZjY4MTAxYWI2ODMzNjM5ODc4NDQ3ODJmNjU1ZmI4ZmMwNzE5NWIzMWQ4MzllODg4YTUxN2Q0OGI4MTY2MfjOZbc=: 00:21:55.602 11:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:55.602 11:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:21:55.602 11:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Yjg5ODgyNzQwZTY3M2Q0NTkzNDcyNjYzZjU5ZGI4ZDeC/LYh: 00:21:55.602 11:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjM2ZjY4MTAxYWI2ODMzNjM5ODc4NDQ3ODJmNjU1ZmI4ZmMwNzE5NWIzMWQ4MzllODg4YTUxN2Q0OGI4MTY2MfjOZbc=: ]] 00:21:55.602 11:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjM2ZjY4MTAxYWI2ODMzNjM5ODc4NDQ3ODJmNjU1ZmI4ZmMwNzE5NWIzMWQ4MzllODg4YTUxN2Q0OGI4MTY2MfjOZbc=: 00:21:55.602 11:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:21:55.602 11:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:55.602 11:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:55.602 11:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:21:55.602 11:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:21:55.602 11:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:55.602 11:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:55.602 11:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:55.602 11:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:55.602 11:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:55.602 11:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:55.602 11:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:55.602 11:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:55.602 11:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:55.602 11:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:55.602 11:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:55.602 11:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:55.602 11:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:55.602 11:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:55.602 11:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:55.602 11:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:55.602 11:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:55.602 11:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:55.602 11:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:21:55.860 nvme0n1 00:21:55.860 11:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:55.860 11:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:55.860 11:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:55.860 11:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:55.860 11:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:55.860 11:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:55.860 11:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:55.860 11:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:55.860 11:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:55.860 11:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:55.860 11:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:55.860 11:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:55.860 11:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:21:55.860 11:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:55.860 11:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:55.860 11:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:21:55.860 11:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:21:55.860 11:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmM0MDdhNTg3M2E0YjJmNDRhZTA1ZmFhMzFmMzllOGUwZjI5N2FmMzU4YWFjMzI1e9mfRQ==: 00:21:55.860 11:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzlhMDMwODA2N2YyN2E0NmE4N2JhZDNlZjI3MjA4OTQ0ODg4NTk3ZGY4MWJlOTNmW0IdTg==: 00:21:55.860 11:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:55.860 11:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:21:55.860 11:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmM0MDdhNTg3M2E0YjJmNDRhZTA1ZmFhMzFmMzllOGUwZjI5N2FmMzU4YWFjMzI1e9mfRQ==: 00:21:55.860 11:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzlhMDMwODA2N2YyN2E0NmE4N2JhZDNlZjI3MjA4OTQ0ODg4NTk3ZGY4MWJlOTNmW0IdTg==: ]] 00:21:55.860 11:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzlhMDMwODA2N2YyN2E0NmE4N2JhZDNlZjI3MjA4OTQ0ODg4NTk3ZGY4MWJlOTNmW0IdTg==: 00:21:55.860 11:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:21:55.861 11:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:55.861 11:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:55.861 11:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:21:55.861 11:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:21:55.861 11:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:21:55.861 11:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:55.861 11:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:55.861 11:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:55.861 11:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:55.861 11:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:55.861 11:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:55.861 11:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:55.861 11:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:55.861 11:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:55.861 11:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:55.861 11:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:55.861 11:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:55.861 11:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:55.861 11:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:55.861 11:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:55.861 11:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:55.861 11:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:55.861 11:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:55.861 nvme0n1 00:21:55.861 11:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:55.861 11:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:55.861 11:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:55.861 11:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:55.861 11:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:55.861 11:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:56.209 11:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:56.209 11:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:56.209 11:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:56.209 11:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:56.209 11:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:56.209 11:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:56.209 11:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:21:56.209 
11:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:56.209 11:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:56.209 11:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:21:56.209 11:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:21:56.209 11:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDdmYjZlNGVkYTIzYmUzZTBmMGUwMTMwZDRmMzQ4YjE3mYm7: 00:21:56.209 11:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MmM5NDVjNGVjYTIzYTcyMTUwODNiZjIyNGM2NDE3YmGHQr9X: 00:21:56.209 11:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:56.209 11:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:21:56.209 11:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDdmYjZlNGVkYTIzYmUzZTBmMGUwMTMwZDRmMzQ4YjE3mYm7: 00:21:56.209 11:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MmM5NDVjNGVjYTIzYTcyMTUwODNiZjIyNGM2NDE3YmGHQr9X: ]] 00:21:56.209 11:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MmM5NDVjNGVjYTIzYTcyMTUwODNiZjIyNGM2NDE3YmGHQr9X: 00:21:56.209 11:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:21:56.209 11:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:56.209 11:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:56.209 11:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:21:56.209 11:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:21:56.209 11:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:56.209 11:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:56.209 11:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:56.209 11:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:56.209 11:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:56.209 11:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:56.209 11:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:56.209 11:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:56.209 11:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:56.209 11:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:56.209 11:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:56.209 11:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:56.209 11:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:56.209 11:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:56.209 11:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:56.209 11:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:56.209 11:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:56.209 11:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:56.209 11:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:56.209 nvme0n1 00:21:56.209 11:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:56.209 11:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:56.209 11:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:56.209 11:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:56.209 11:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:56.209 11:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:56.209 11:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:56.209 11:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:56.209 11:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:56.209 11:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:56.209 11:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:56.209 11:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:56.209 11:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:21:56.209 11:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:56.209 11:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:56.209 11:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:21:56.209 11:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:21:56.209 11:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MGI1MGNlN2ViZWZhM2I4MTgzMTNlOWY0YzAxY2Y4OWY1NGU4MjdjMmQxZWY5YWE0JcQyCw==: 00:21:56.209 11:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjVlZTczMzM5ZDk5NDBiNzc5NWQyMjMyNjFmMzU4NmR2PvMh: 00:21:56.209 11:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:56.209 11:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:21:56.209 11:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MGI1MGNlN2ViZWZhM2I4MTgzMTNlOWY0YzAxY2Y4OWY1NGU4MjdjMmQxZWY5YWE0JcQyCw==: 00:21:56.209 11:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjVlZTczMzM5ZDk5NDBiNzc5NWQyMjMyNjFmMzU4NmR2PvMh: ]] 00:21:56.209 11:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjVlZTczMzM5ZDk5NDBiNzc5NWQyMjMyNjFmMzU4NmR2PvMh: 00:21:56.209 11:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:21:56.209 11:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:56.209 
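Inside connect_authenticate (host/auth.sh@55-@65 above) every iteration drives the same four RPCs through rpc_cmd. A condensed sketch of that flow, assuming rpc_cmd resolves to scripts/rpc.py as in SPDK's test framework; the address, NQNs and flag names are taken verbatim from the trace, and key$keyid / ckey$keyid are names of keys registered earlier in the test (not shown in this excerpt):

    digest=sha384 dhgroup=ffdhe2048 keyid=3   # the iteration in flight at this point in the log
    # 1) limit the host to the digest/dhgroup pair under test
    scripts/rpc.py bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
    # 2) attach with the matching host key (and controller key, when one exists for this keyid)
    scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"
    # 3) confirm the controller came up, then detach so the next iteration starts clean
    scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name'   # expected to print nvme0
    scripts/rpc.py bdev_nvme_detach_controller nvme0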
11:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:56.209 11:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:21:56.209 11:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:21:56.209 11:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:56.209 11:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:56.209 11:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:56.209 11:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:56.209 11:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:56.209 11:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:56.209 11:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:56.209 11:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:56.209 11:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:56.209 11:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:56.209 11:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:56.209 11:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:56.209 11:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:56.209 11:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:56.209 11:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:56.209 11:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:56.209 11:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:21:56.209 11:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:56.209 11:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:56.209 nvme0n1 00:21:56.209 11:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:56.209 11:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:56.209 11:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:56.209 11:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:56.209 11:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:56.209 11:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:56.467 11:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:56.467 11:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:56.467 11:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:56.467 11:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:21:56.467 11:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:56.467 11:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:56.467 11:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:21:56.467 11:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:56.467 11:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:56.467 11:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:21:56.467 11:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:21:56.467 11:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDE5MmNiNjUxNDI4Y2FiYmMzMGUyNTNmZDIyMWI1NWQ1ODk0ZjZlYjQ1NTk5NDZkMGVlZjg0ODYyZmQwOTVlYu6ZJZo=: 00:21:56.467 11:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:21:56.467 11:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:56.467 11:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:21:56.467 11:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDE5MmNiNjUxNDI4Y2FiYmMzMGUyNTNmZDIyMWI1NWQ1ODk0ZjZlYjQ1NTk5NDZkMGVlZjg0ODYyZmQwOTVlYu6ZJZo=: 00:21:56.467 11:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:21:56.467 11:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:21:56.467 11:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:56.467 11:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:56.467 11:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:21:56.467 11:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:21:56.467 11:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:56.467 11:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:56.467 11:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:56.467 11:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:56.467 11:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:56.467 11:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:56.467 11:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:56.467 11:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:56.467 11:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:56.467 11:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:56.467 11:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:56.467 11:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:56.467 11:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:56.467 11:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:56.467 11:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:56.467 11:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:56.467 11:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:21:56.467 11:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:56.467 11:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:56.467 nvme0n1 00:21:56.467 11:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:56.467 11:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:56.467 11:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:56.467 11:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:56.467 11:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:56.467 11:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:56.467 11:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:56.467 11:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:56.467 11:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:56.468 11:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:56.468 11:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:56.468 11:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:21:56.468 11:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:56.468 11:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:21:56.468 11:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:56.468 11:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:56.468 11:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:21:56.468 11:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:21:56.468 11:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Yjg5ODgyNzQwZTY3M2Q0NTkzNDcyNjYzZjU5ZGI4ZDeC/LYh: 00:21:56.468 11:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjM2ZjY4MTAxYWI2ODMzNjM5ODc4NDQ3ODJmNjU1ZmI4ZmMwNzE5NWIzMWQ4MzllODg4YTUxN2Q0OGI4MTY2MfjOZbc=: 00:21:56.468 11:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:56.468 11:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:21:56.468 11:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Yjg5ODgyNzQwZTY3M2Q0NTkzNDcyNjYzZjU5ZGI4ZDeC/LYh: 00:21:56.468 11:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjM2ZjY4MTAxYWI2ODMzNjM5ODc4NDQ3ODJmNjU1ZmI4ZmMwNzE5NWIzMWQ4MzllODg4YTUxN2Q0OGI4MTY2MfjOZbc=: ]] 00:21:56.468 11:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:YjM2ZjY4MTAxYWI2ODMzNjM5ODc4NDQ3ODJmNjU1ZmI4ZmMwNzE5NWIzMWQ4MzllODg4YTUxN2Q0OGI4MTY2MfjOZbc=: 00:21:56.468 11:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:21:56.468 11:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:56.468 11:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:56.468 11:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:21:56.468 11:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:21:56.468 11:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:56.468 11:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:56.468 11:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:56.468 11:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:56.468 11:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:56.468 11:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:56.468 11:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:56.468 11:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:56.468 11:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:56.468 11:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:56.468 11:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:56.468 11:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:56.468 11:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:56.468 11:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:56.468 11:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:56.468 11:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:56.468 11:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:56.468 11:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:56.468 11:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:56.727 nvme0n1 00:21:56.727 11:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:56.727 11:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:56.727 11:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:56.727 11:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:56.727 11:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:56.727 11:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:56.727 
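The get_main_ns_ip fragment that precedes every attach above (local ip, the ip_candidates map, echo 10.0.0.1) only resolves which address the host should dial for the transport under test. A reconstruction from the nvmf/common.sh trace lines; the $TEST_TRANSPORT variable name is an assumption, since the xtrace only shows its expanded value (tcp):

    # reconstructed from the nvmf/common.sh@769..@783 entries in the trace
    get_main_ns_ip() {
        local ip
        local -A ip_candidates
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP   # RDMA jobs would read the first target IP
        ip_candidates["tcp"]=NVMF_INITIATOR_IP       # this TCP job reads the initiator IP
        [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
        ip=${ip_candidates[$TEST_TRANSPORT]}         # name of the variable holding the address
        [[ -z ${!ip} ]] && return 1                  # indirect expansion; 10.0.0.1 in this run
        echo "${!ip}"
    }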
11:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:56.727 11:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:56.727 11:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:56.727 11:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:56.727 11:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:56.727 11:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:56.727 11:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:21:56.727 11:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:56.727 11:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:56.727 11:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:21:56.727 11:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:21:56.727 11:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmM0MDdhNTg3M2E0YjJmNDRhZTA1ZmFhMzFmMzllOGUwZjI5N2FmMzU4YWFjMzI1e9mfRQ==: 00:21:56.727 11:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzlhMDMwODA2N2YyN2E0NmE4N2JhZDNlZjI3MjA4OTQ0ODg4NTk3ZGY4MWJlOTNmW0IdTg==: 00:21:56.727 11:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:56.727 11:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:21:56.727 11:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmM0MDdhNTg3M2E0YjJmNDRhZTA1ZmFhMzFmMzllOGUwZjI5N2FmMzU4YWFjMzI1e9mfRQ==: 00:21:56.727 11:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzlhMDMwODA2N2YyN2E0NmE4N2JhZDNlZjI3MjA4OTQ0ODg4NTk3ZGY4MWJlOTNmW0IdTg==: ]] 00:21:56.727 11:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzlhMDMwODA2N2YyN2E0NmE4N2JhZDNlZjI3MjA4OTQ0ODg4NTk3ZGY4MWJlOTNmW0IdTg==: 00:21:56.727 11:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:21:56.727 11:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:56.727 11:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:56.727 11:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:21:56.727 11:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:21:56.727 11:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:56.727 11:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:56.727 11:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:56.727 11:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:56.727 11:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:56.727 11:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:56.727 11:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:56.727 11:34:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:56.727 11:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:56.727 11:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:56.727 11:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:56.727 11:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:56.727 11:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:56.727 11:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:56.727 11:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:56.727 11:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:56.727 11:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:56.727 11:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:56.727 11:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:56.986 nvme0n1 00:21:56.986 11:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:56.986 11:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:56.986 11:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:56.986 11:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:56.986 11:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:56.986 11:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:56.986 11:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:56.986 11:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:56.986 11:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:56.986 11:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:56.986 11:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:56.986 11:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:56.986 11:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:21:56.986 11:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:56.986 11:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:56.986 11:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:21:56.986 11:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:21:56.986 11:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDdmYjZlNGVkYTIzYmUzZTBmMGUwMTMwZDRmMzQ4YjE3mYm7: 00:21:56.986 11:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MmM5NDVjNGVjYTIzYTcyMTUwODNiZjIyNGM2NDE3YmGHQr9X: 00:21:56.986 11:34:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:56.986 11:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:21:56.986 11:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDdmYjZlNGVkYTIzYmUzZTBmMGUwMTMwZDRmMzQ4YjE3mYm7: 00:21:56.986 11:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MmM5NDVjNGVjYTIzYTcyMTUwODNiZjIyNGM2NDE3YmGHQr9X: ]] 00:21:56.986 11:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MmM5NDVjNGVjYTIzYTcyMTUwODNiZjIyNGM2NDE3YmGHQr9X: 00:21:56.986 11:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:21:56.986 11:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:56.986 11:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:56.986 11:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:21:56.986 11:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:21:56.986 11:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:56.986 11:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:56.986 11:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:56.986 11:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:56.986 11:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:56.986 11:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:56.986 11:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:56.986 11:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:56.986 11:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:56.986 11:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:56.986 11:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:56.986 11:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:56.986 11:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:56.986 11:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:56.986 11:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:56.986 11:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:56.986 11:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:56.986 11:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:56.986 11:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:56.986 nvme0n1 00:21:56.986 11:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:56.986 11:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:56.986 11:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:56.986 11:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:56.986 11:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:57.245 11:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:57.245 11:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:57.245 11:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:57.245 11:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:57.245 11:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:57.245 11:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:57.245 11:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:57.245 11:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:21:57.245 11:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:57.245 11:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:57.245 11:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:21:57.245 11:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:21:57.245 11:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MGI1MGNlN2ViZWZhM2I4MTgzMTNlOWY0YzAxY2Y4OWY1NGU4MjdjMmQxZWY5YWE0JcQyCw==: 00:21:57.245 11:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjVlZTczMzM5ZDk5NDBiNzc5NWQyMjMyNjFmMzU4NmR2PvMh: 00:21:57.245 11:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:57.245 11:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:21:57.245 11:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MGI1MGNlN2ViZWZhM2I4MTgzMTNlOWY0YzAxY2Y4OWY1NGU4MjdjMmQxZWY5YWE0JcQyCw==: 00:21:57.245 11:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjVlZTczMzM5ZDk5NDBiNzc5NWQyMjMyNjFmMzU4NmR2PvMh: ]] 00:21:57.245 11:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjVlZTczMzM5ZDk5NDBiNzc5NWQyMjMyNjFmMzU4NmR2PvMh: 00:21:57.245 11:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:21:57.245 11:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:57.245 11:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:57.245 11:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:21:57.245 11:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:21:57.245 11:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:57.245 11:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:57.245 11:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:57.245 11:34:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:57.245 11:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:57.245 11:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:57.245 11:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:57.245 11:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:57.245 11:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:57.245 11:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:57.245 11:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:57.245 11:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:57.245 11:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:57.245 11:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:57.245 11:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:57.245 11:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:57.245 11:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:21:57.245 11:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:57.245 11:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:57.245 nvme0n1 00:21:57.245 11:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:57.245 11:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:57.245 11:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:57.245 11:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:57.245 11:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:57.245 11:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:57.245 11:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:57.245 11:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:57.245 11:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:57.245 11:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:57.245 11:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:57.245 11:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:57.245 11:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:21:57.245 11:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:57.245 11:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:57.245 11:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:21:57.245 
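On the target side, each nvmet_auth_set_key call above reduces to four echoes: the 'hmac(...)' string, the DH group, the DHHC-1 secret, and optionally the controller secret. The xtrace does not record where those echoes are redirected; the sketch below is a hypothetical reconstruction assuming the in-kernel nvmet target's per-host configfs attributes (the paths are an assumption, not visible in this log):

    # hypothetical reconstruction of nvmet_auth_set_key; configfs paths are assumed, not traced
    nvmet_auth_set_key() {
        local digest=$1 dhgroup=$2 keyid=$3
        local host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
        echo "hmac($digest)"       > "$host/dhchap_hash"
        echo "$dhgroup"            > "$host/dhchap_dhgroup"
        echo "${keys[keyid]}"      > "$host/dhchap_key"        # DHHC-1:..: secret for this keyid
        [[ -n ${ckeys[keyid]} ]] &&
            echo "${ckeys[keyid]}" > "$host/dhchap_ctrl_key"   # bidirectional auth, when defined
    }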
11:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:21:57.245 11:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDE5MmNiNjUxNDI4Y2FiYmMzMGUyNTNmZDIyMWI1NWQ1ODk0ZjZlYjQ1NTk5NDZkMGVlZjg0ODYyZmQwOTVlYu6ZJZo=: 00:21:57.245 11:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:21:57.245 11:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:57.245 11:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:21:57.245 11:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDE5MmNiNjUxNDI4Y2FiYmMzMGUyNTNmZDIyMWI1NWQ1ODk0ZjZlYjQ1NTk5NDZkMGVlZjg0ODYyZmQwOTVlYu6ZJZo=: 00:21:57.245 11:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:21:57.246 11:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:21:57.246 11:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:57.246 11:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:57.246 11:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:21:57.246 11:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:21:57.246 11:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:57.246 11:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:57.246 11:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:57.246 11:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:57.504 11:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:57.504 11:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:57.504 11:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:57.504 11:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:57.504 11:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:57.504 11:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:57.504 11:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:57.504 11:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:57.504 11:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:57.504 11:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:57.504 11:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:57.504 11:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:57.504 11:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:21:57.504 11:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:57.504 11:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:21:57.504 nvme0n1 00:21:57.504 11:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:57.504 11:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:57.504 11:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:57.504 11:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:57.504 11:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:57.504 11:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:57.504 11:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:57.504 11:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:57.504 11:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:57.504 11:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:57.504 11:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:57.504 11:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:21:57.504 11:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:57.504 11:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:21:57.504 11:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:57.504 11:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:57.504 11:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:21:57.504 11:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:21:57.505 11:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Yjg5ODgyNzQwZTY3M2Q0NTkzNDcyNjYzZjU5ZGI4ZDeC/LYh: 00:21:57.505 11:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjM2ZjY4MTAxYWI2ODMzNjM5ODc4NDQ3ODJmNjU1ZmI4ZmMwNzE5NWIzMWQ4MzllODg4YTUxN2Q0OGI4MTY2MfjOZbc=: 00:21:57.505 11:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:57.505 11:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:21:57.505 11:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Yjg5ODgyNzQwZTY3M2Q0NTkzNDcyNjYzZjU5ZGI4ZDeC/LYh: 00:21:57.505 11:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjM2ZjY4MTAxYWI2ODMzNjM5ODc4NDQ3ODJmNjU1ZmI4ZmMwNzE5NWIzMWQ4MzllODg4YTUxN2Q0OGI4MTY2MfjOZbc=: ]] 00:21:57.505 11:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjM2ZjY4MTAxYWI2ODMzNjM5ODc4NDQ3ODJmNjU1ZmI4ZmMwNzE5NWIzMWQ4MzllODg4YTUxN2Q0OGI4MTY2MfjOZbc=: 00:21:57.505 11:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:21:57.505 11:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:57.505 11:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:57.505 11:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:21:57.505 11:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:21:57.505 11:34:43 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:57.505 11:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:57.505 11:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:57.505 11:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:57.505 11:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:57.505 11:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:57.505 11:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:57.505 11:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:57.505 11:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:57.505 11:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:57.505 11:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:57.505 11:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:57.505 11:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:57.505 11:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:57.505 11:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:57.505 11:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:57.505 11:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:57.505 11:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:57.505 11:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:57.763 nvme0n1 00:21:57.763 11:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:57.763 11:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:57.763 11:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:57.763 11:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:57.763 11:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:57.763 11:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:57.763 11:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:57.763 11:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:57.763 11:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:57.763 11:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:57.763 11:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:57.763 11:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:57.763 11:34:43 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:21:57.763 11:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:57.763 11:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:57.763 11:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:21:57.763 11:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:21:57.763 11:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmM0MDdhNTg3M2E0YjJmNDRhZTA1ZmFhMzFmMzllOGUwZjI5N2FmMzU4YWFjMzI1e9mfRQ==: 00:21:57.763 11:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzlhMDMwODA2N2YyN2E0NmE4N2JhZDNlZjI3MjA4OTQ0ODg4NTk3ZGY4MWJlOTNmW0IdTg==: 00:21:57.763 11:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:57.763 11:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:21:57.763 11:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmM0MDdhNTg3M2E0YjJmNDRhZTA1ZmFhMzFmMzllOGUwZjI5N2FmMzU4YWFjMzI1e9mfRQ==: 00:21:57.763 11:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzlhMDMwODA2N2YyN2E0NmE4N2JhZDNlZjI3MjA4OTQ0ODg4NTk3ZGY4MWJlOTNmW0IdTg==: ]] 00:21:57.763 11:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzlhMDMwODA2N2YyN2E0NmE4N2JhZDNlZjI3MjA4OTQ0ODg4NTk3ZGY4MWJlOTNmW0IdTg==: 00:21:57.763 11:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:21:57.763 11:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:57.763 11:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:57.763 11:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:21:57.763 11:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:21:57.763 11:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:57.763 11:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:57.763 11:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:57.763 11:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:57.763 11:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:57.763 11:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:57.763 11:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:57.763 11:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:57.763 11:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:57.763 11:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:57.763 11:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:57.763 11:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:57.763 11:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:57.763 11:34:43 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:57.763 11:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:57.763 11:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:57.763 11:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:57.763 11:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:57.763 11:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:58.021 nvme0n1 00:21:58.021 11:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:58.021 11:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:58.021 11:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:58.021 11:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:58.021 11:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:58.021 11:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:58.279 11:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:58.279 11:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:58.279 11:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:58.279 11:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:58.279 11:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:58.279 11:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:58.279 11:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:21:58.279 11:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:58.279 11:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:58.279 11:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:21:58.279 11:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:21:58.279 11:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDdmYjZlNGVkYTIzYmUzZTBmMGUwMTMwZDRmMzQ4YjE3mYm7: 00:21:58.279 11:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MmM5NDVjNGVjYTIzYTcyMTUwODNiZjIyNGM2NDE3YmGHQr9X: 00:21:58.279 11:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:58.279 11:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:21:58.279 11:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDdmYjZlNGVkYTIzYmUzZTBmMGUwMTMwZDRmMzQ4YjE3mYm7: 00:21:58.279 11:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MmM5NDVjNGVjYTIzYTcyMTUwODNiZjIyNGM2NDE3YmGHQr9X: ]] 00:21:58.279 11:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MmM5NDVjNGVjYTIzYTcyMTUwODNiZjIyNGM2NDE3YmGHQr9X: 00:21:58.279 11:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:21:58.279 11:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:58.279 11:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:58.279 11:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:21:58.279 11:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:21:58.279 11:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:58.279 11:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:58.279 11:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:58.279 11:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:58.279 11:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:58.279 11:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:58.279 11:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:58.279 11:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:58.279 11:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:58.279 11:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:58.279 11:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:58.279 11:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:58.279 11:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:58.279 11:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:58.279 11:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:58.279 11:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:58.279 11:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:58.279 11:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:58.279 11:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:58.537 nvme0n1 00:21:58.537 11:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:58.537 11:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:58.537 11:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:58.537 11:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:58.537 11:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:58.537 11:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:58.537 11:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:58.537 11:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:21:58.537 11:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:58.537 11:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:58.537 11:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:58.537 11:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:58.537 11:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:21:58.537 11:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:58.537 11:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:58.537 11:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:21:58.537 11:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:21:58.537 11:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MGI1MGNlN2ViZWZhM2I4MTgzMTNlOWY0YzAxY2Y4OWY1NGU4MjdjMmQxZWY5YWE0JcQyCw==: 00:21:58.537 11:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjVlZTczMzM5ZDk5NDBiNzc5NWQyMjMyNjFmMzU4NmR2PvMh: 00:21:58.537 11:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:58.537 11:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:21:58.537 11:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MGI1MGNlN2ViZWZhM2I4MTgzMTNlOWY0YzAxY2Y4OWY1NGU4MjdjMmQxZWY5YWE0JcQyCw==: 00:21:58.537 11:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjVlZTczMzM5ZDk5NDBiNzc5NWQyMjMyNjFmMzU4NmR2PvMh: ]] 00:21:58.537 11:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjVlZTczMzM5ZDk5NDBiNzc5NWQyMjMyNjFmMzU4NmR2PvMh: 00:21:58.537 11:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:21:58.537 11:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:58.537 11:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:58.537 11:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:21:58.537 11:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:21:58.537 11:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:58.537 11:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:58.537 11:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:58.537 11:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:58.537 11:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:58.537 11:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:58.537 11:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:58.537 11:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:58.537 11:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:58.537 11:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:58.537 11:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:58.537 11:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:58.537 11:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:58.537 11:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:58.537 11:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:58.537 11:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:58.537 11:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:21:58.537 11:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:58.537 11:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:58.794 nvme0n1 00:21:58.794 11:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:58.794 11:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:58.794 11:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:58.794 11:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:58.794 11:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:58.794 11:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:58.794 11:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:58.794 11:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:58.794 11:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:58.795 11:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:58.795 11:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:58.795 11:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:58.795 11:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:21:58.795 11:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:58.795 11:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:58.795 11:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:21:58.795 11:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:21:58.795 11:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDE5MmNiNjUxNDI4Y2FiYmMzMGUyNTNmZDIyMWI1NWQ1ODk0ZjZlYjQ1NTk5NDZkMGVlZjg0ODYyZmQwOTVlYu6ZJZo=: 00:21:58.795 11:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:21:58.795 11:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:58.795 11:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:21:58.795 11:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:NDE5MmNiNjUxNDI4Y2FiYmMzMGUyNTNmZDIyMWI1NWQ1ODk0ZjZlYjQ1NTk5NDZkMGVlZjg0ODYyZmQwOTVlYu6ZJZo=: 00:21:58.795 11:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:21:58.795 11:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:21:58.795 11:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:58.795 11:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:58.795 11:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:21:58.795 11:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:21:58.795 11:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:58.795 11:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:58.795 11:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:58.795 11:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:58.795 11:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:58.795 11:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:58.795 11:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:58.795 11:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:58.795 11:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:58.795 11:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:58.795 11:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:58.795 11:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:58.795 11:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:58.795 11:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:58.795 11:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:58.795 11:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:58.795 11:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:21:58.795 11:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:58.795 11:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:59.054 nvme0n1 00:21:59.054 11:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:59.054 11:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:59.054 11:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:59.054 11:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:59.054 11:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:59.054 11:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:59.054 11:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:59.054 11:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:59.054 11:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:59.054 11:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:59.054 11:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:59.054 11:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:21:59.054 11:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:59.054 11:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:21:59.054 11:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:59.054 11:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:59.054 11:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:21:59.054 11:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:21:59.054 11:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Yjg5ODgyNzQwZTY3M2Q0NTkzNDcyNjYzZjU5ZGI4ZDeC/LYh: 00:21:59.054 11:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjM2ZjY4MTAxYWI2ODMzNjM5ODc4NDQ3ODJmNjU1ZmI4ZmMwNzE5NWIzMWQ4MzllODg4YTUxN2Q0OGI4MTY2MfjOZbc=: 00:21:59.054 11:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:59.054 11:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:21:59.054 11:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Yjg5ODgyNzQwZTY3M2Q0NTkzNDcyNjYzZjU5ZGI4ZDeC/LYh: 00:21:59.054 11:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjM2ZjY4MTAxYWI2ODMzNjM5ODc4NDQ3ODJmNjU1ZmI4ZmMwNzE5NWIzMWQ4MzllODg4YTUxN2Q0OGI4MTY2MfjOZbc=: ]] 00:21:59.054 11:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjM2ZjY4MTAxYWI2ODMzNjM5ODc4NDQ3ODJmNjU1ZmI4ZmMwNzE5NWIzMWQ4MzllODg4YTUxN2Q0OGI4MTY2MfjOZbc=: 00:21:59.054 11:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:21:59.054 11:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:59.054 11:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:59.054 11:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:21:59.054 11:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:21:59.055 11:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:59.055 11:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:59.055 11:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:59.055 11:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:59.055 11:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:59.055 11:34:44 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:59.055 11:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:59.055 11:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:59.055 11:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:59.055 11:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:59.055 11:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:59.055 11:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:59.055 11:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:59.055 11:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:59.055 11:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:59.055 11:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:59.055 11:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:59.055 11:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:59.055 11:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:59.386 nvme0n1 00:21:59.386 11:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:59.644 11:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:59.644 11:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:59.644 11:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:59.644 11:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:59.644 11:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:59.644 11:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:59.644 11:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:59.644 11:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:59.644 11:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:59.644 11:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:59.644 11:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:59.644 11:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:21:59.644 11:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:59.644 11:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:59.644 11:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:21:59.644 11:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:21:59.644 11:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:YmM0MDdhNTg3M2E0YjJmNDRhZTA1ZmFhMzFmMzllOGUwZjI5N2FmMzU4YWFjMzI1e9mfRQ==: 00:21:59.644 11:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzlhMDMwODA2N2YyN2E0NmE4N2JhZDNlZjI3MjA4OTQ0ODg4NTk3ZGY4MWJlOTNmW0IdTg==: 00:21:59.644 11:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:59.644 11:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:21:59.644 11:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmM0MDdhNTg3M2E0YjJmNDRhZTA1ZmFhMzFmMzllOGUwZjI5N2FmMzU4YWFjMzI1e9mfRQ==: 00:21:59.644 11:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzlhMDMwODA2N2YyN2E0NmE4N2JhZDNlZjI3MjA4OTQ0ODg4NTk3ZGY4MWJlOTNmW0IdTg==: ]] 00:21:59.644 11:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzlhMDMwODA2N2YyN2E0NmE4N2JhZDNlZjI3MjA4OTQ0ODg4NTk3ZGY4MWJlOTNmW0IdTg==: 00:21:59.644 11:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:21:59.644 11:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:59.644 11:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:59.644 11:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:21:59.644 11:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:21:59.644 11:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:59.644 11:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:59.644 11:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:59.644 11:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:59.644 11:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:59.644 11:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:59.644 11:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:59.644 11:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:59.644 11:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:59.644 11:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:59.644 11:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:59.644 11:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:59.644 11:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:59.644 11:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:59.644 11:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:59.644 11:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:59.644 11:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:59.644 11:34:45 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:59.644 11:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:59.901 nvme0n1 00:21:59.901 11:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:59.901 11:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:59.901 11:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:59.901 11:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:59.901 11:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:59.901 11:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:59.901 11:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:59.901 11:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:59.901 11:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:59.901 11:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:59.901 11:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:59.901 11:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:59.901 11:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:21:59.901 11:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:59.901 11:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:59.901 11:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:21:59.901 11:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:21:59.901 11:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDdmYjZlNGVkYTIzYmUzZTBmMGUwMTMwZDRmMzQ4YjE3mYm7: 00:21:59.901 11:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MmM5NDVjNGVjYTIzYTcyMTUwODNiZjIyNGM2NDE3YmGHQr9X: 00:21:59.901 11:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:59.901 11:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:21:59.901 11:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDdmYjZlNGVkYTIzYmUzZTBmMGUwMTMwZDRmMzQ4YjE3mYm7: 00:21:59.901 11:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MmM5NDVjNGVjYTIzYTcyMTUwODNiZjIyNGM2NDE3YmGHQr9X: ]] 00:21:59.901 11:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MmM5NDVjNGVjYTIzYTcyMTUwODNiZjIyNGM2NDE3YmGHQr9X: 00:21:59.901 11:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:21:59.901 11:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:59.901 11:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:59.901 11:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:21:59.901 11:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:21:59.901 11:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:59.901 11:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:59.901 11:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:59.901 11:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:59.901 11:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:59.901 11:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:59.901 11:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:59.901 11:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:59.901 11:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:59.901 11:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:59.901 11:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:59.901 11:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:59.901 11:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:59.901 11:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:59.901 11:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:59.901 11:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:59.901 11:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:59.901 11:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:59.901 11:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:00.468 nvme0n1 00:22:00.468 11:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:00.468 11:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:00.468 11:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:00.468 11:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:00.468 11:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:00.468 11:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:00.468 11:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:00.468 11:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:00.468 11:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:00.468 11:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:00.468 11:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:00.468 11:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:00.468 11:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe6144 3 00:22:00.468 11:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:00.468 11:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:00.468 11:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:22:00.468 11:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:22:00.468 11:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MGI1MGNlN2ViZWZhM2I4MTgzMTNlOWY0YzAxY2Y4OWY1NGU4MjdjMmQxZWY5YWE0JcQyCw==: 00:22:00.468 11:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjVlZTczMzM5ZDk5NDBiNzc5NWQyMjMyNjFmMzU4NmR2PvMh: 00:22:00.468 11:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:00.468 11:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:22:00.468 11:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MGI1MGNlN2ViZWZhM2I4MTgzMTNlOWY0YzAxY2Y4OWY1NGU4MjdjMmQxZWY5YWE0JcQyCw==: 00:22:00.468 11:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjVlZTczMzM5ZDk5NDBiNzc5NWQyMjMyNjFmMzU4NmR2PvMh: ]] 00:22:00.468 11:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjVlZTczMzM5ZDk5NDBiNzc5NWQyMjMyNjFmMzU4NmR2PvMh: 00:22:00.468 11:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:22:00.468 11:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:00.468 11:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:00.468 11:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:22:00.468 11:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:22:00.468 11:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:00.468 11:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:22:00.468 11:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:00.468 11:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:00.468 11:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:00.468 11:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:00.468 11:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:00.468 11:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:00.468 11:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:00.468 11:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:00.468 11:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:00.468 11:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:00.468 11:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:00.468 11:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:00.468 11:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:00.468 11:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:00.468 11:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:22:00.468 11:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:00.468 11:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:01.034 nvme0n1 00:22:01.034 11:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:01.034 11:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:01.034 11:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:01.034 11:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:01.034 11:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:01.034 11:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:01.034 11:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:01.034 11:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:01.034 11:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:01.034 11:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:01.034 11:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:01.034 11:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:01.034 11:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:22:01.034 11:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:01.034 11:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:01.034 11:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:22:01.034 11:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:22:01.034 11:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDE5MmNiNjUxNDI4Y2FiYmMzMGUyNTNmZDIyMWI1NWQ1ODk0ZjZlYjQ1NTk5NDZkMGVlZjg0ODYyZmQwOTVlYu6ZJZo=: 00:22:01.034 11:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:22:01.034 11:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:01.034 11:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:22:01.034 11:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDE5MmNiNjUxNDI4Y2FiYmMzMGUyNTNmZDIyMWI1NWQ1ODk0ZjZlYjQ1NTk5NDZkMGVlZjg0ODYyZmQwOTVlYu6ZJZo=: 00:22:01.034 11:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:22:01.034 11:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:22:01.034 11:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:01.034 11:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:01.034 11:34:46 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:22:01.034 11:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:22:01.034 11:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:01.034 11:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:22:01.034 11:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:01.034 11:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:01.034 11:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:01.034 11:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:01.034 11:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:01.034 11:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:01.034 11:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:01.034 11:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:01.034 11:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:01.034 11:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:01.034 11:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:01.034 11:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:01.034 11:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:01.034 11:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:01.034 11:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:22:01.034 11:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:01.035 11:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:01.292 nvme0n1 00:22:01.292 11:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:01.293 11:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:01.293 11:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:01.293 11:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:01.293 11:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:01.293 11:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:01.293 11:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:01.293 11:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:01.293 11:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:01.293 11:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:01.293 11:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:01.293 11:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:22:01.293 11:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:01.293 11:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:22:01.293 11:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:01.293 11:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:01.293 11:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:22:01.293 11:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:22:01.293 11:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Yjg5ODgyNzQwZTY3M2Q0NTkzNDcyNjYzZjU5ZGI4ZDeC/LYh: 00:22:01.293 11:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjM2ZjY4MTAxYWI2ODMzNjM5ODc4NDQ3ODJmNjU1ZmI4ZmMwNzE5NWIzMWQ4MzllODg4YTUxN2Q0OGI4MTY2MfjOZbc=: 00:22:01.293 11:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:01.293 11:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:22:01.293 11:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Yjg5ODgyNzQwZTY3M2Q0NTkzNDcyNjYzZjU5ZGI4ZDeC/LYh: 00:22:01.293 11:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjM2ZjY4MTAxYWI2ODMzNjM5ODc4NDQ3ODJmNjU1ZmI4ZmMwNzE5NWIzMWQ4MzllODg4YTUxN2Q0OGI4MTY2MfjOZbc=: ]] 00:22:01.293 11:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjM2ZjY4MTAxYWI2ODMzNjM5ODc4NDQ3ODJmNjU1ZmI4ZmMwNzE5NWIzMWQ4MzllODg4YTUxN2Q0OGI4MTY2MfjOZbc=: 00:22:01.293 11:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:22:01.293 11:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:01.293 11:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:01.293 11:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:22:01.293 11:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:22:01.293 11:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:01.551 11:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:01.551 11:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:01.551 11:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:01.551 11:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:01.551 11:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:01.551 11:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:01.551 11:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:01.551 11:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:01.551 11:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:01.551 11:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:01.551 11:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:01.552 11:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:01.552 11:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:01.552 11:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:01.552 11:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:01.552 11:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:01.552 11:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:01.552 11:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:02.117 nvme0n1 00:22:02.117 11:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:02.117 11:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:02.117 11:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:02.117 11:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:02.117 11:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:02.118 11:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:02.118 11:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:02.118 11:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:02.118 11:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:02.118 11:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:02.118 11:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:02.118 11:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:02.118 11:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:22:02.118 11:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:02.118 11:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:02.118 11:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:22:02.118 11:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:22:02.118 11:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmM0MDdhNTg3M2E0YjJmNDRhZTA1ZmFhMzFmMzllOGUwZjI5N2FmMzU4YWFjMzI1e9mfRQ==: 00:22:02.118 11:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzlhMDMwODA2N2YyN2E0NmE4N2JhZDNlZjI3MjA4OTQ0ODg4NTk3ZGY4MWJlOTNmW0IdTg==: 00:22:02.118 11:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:02.118 11:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:22:02.118 11:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:YmM0MDdhNTg3M2E0YjJmNDRhZTA1ZmFhMzFmMzllOGUwZjI5N2FmMzU4YWFjMzI1e9mfRQ==: 00:22:02.118 11:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzlhMDMwODA2N2YyN2E0NmE4N2JhZDNlZjI3MjA4OTQ0ODg4NTk3ZGY4MWJlOTNmW0IdTg==: ]] 00:22:02.118 11:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzlhMDMwODA2N2YyN2E0NmE4N2JhZDNlZjI3MjA4OTQ0ODg4NTk3ZGY4MWJlOTNmW0IdTg==: 00:22:02.118 11:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:22:02.118 11:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:02.118 11:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:02.118 11:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:22:02.118 11:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:22:02.118 11:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:02.118 11:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:02.118 11:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:02.118 11:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:02.118 11:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:02.118 11:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:02.118 11:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:02.118 11:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:02.118 11:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:02.118 11:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:02.118 11:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:02.118 11:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:02.118 11:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:02.118 11:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:02.118 11:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:02.118 11:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:02.118 11:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:02.118 11:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:02.118 11:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:02.685 nvme0n1 00:22:02.685 11:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:02.685 11:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:02.685 11:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:02.685 11:34:48 
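The auth.sh@48-51 echoes traced above are the target-side half of each iteration: the digest, DH group, host key and (when present) controller key are pushed to the kernel nvmet target before the host tries to connect. A minimal sketch of that step, assuming the upstream Linux nvmet configfs attributes (dhchap_hash, dhchap_dhgroup, dhchap_key, dhchap_ctrl_key); the attribute names and paths are not shown in this log and should be verified against the running kernel, and the keys below are placeholders:

    # Program DH-HMAC-CHAP material for one allowed host NQN on the nvmet target.
    # Paths/attribute names assumed from upstream Linux nvmet, not taken from this log.
    hostnqn=nqn.2024-02.io.spdk:host0
    host_dir=/sys/kernel/config/nvmet/hosts/$hostnqn

    echo 'hmac(sha384)'          > "$host_dir/dhchap_hash"      # digest for this iteration
    echo 'ffdhe8192'             > "$host_dir/dhchap_dhgroup"   # FFDHE group for this iteration
    echo 'DHHC-1:00:<host key>:' > "$host_dir/dhchap_key"       # host secret (key${keyid})
    echo 'DHHC-1:03:<ctrl key>:' > "$host_dir/dhchap_ctrl_key"  # controller secret, bidirectional auth only

When no controller key is configured (keyid 4 in this run), the @51 test short-circuits and only the host secret is programmed, so that iteration exercises unidirectional authentication only.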
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:02.685 11:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:02.944 11:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:02.944 11:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:02.944 11:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:02.944 11:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:02.944 11:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:02.944 11:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:02.944 11:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:02.944 11:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:22:02.944 11:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:02.944 11:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:02.944 11:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:22:02.944 11:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:22:02.944 11:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDdmYjZlNGVkYTIzYmUzZTBmMGUwMTMwZDRmMzQ4YjE3mYm7: 00:22:02.944 11:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MmM5NDVjNGVjYTIzYTcyMTUwODNiZjIyNGM2NDE3YmGHQr9X: 00:22:02.944 11:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:02.944 11:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:22:02.944 11:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDdmYjZlNGVkYTIzYmUzZTBmMGUwMTMwZDRmMzQ4YjE3mYm7: 00:22:02.944 11:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MmM5NDVjNGVjYTIzYTcyMTUwODNiZjIyNGM2NDE3YmGHQr9X: ]] 00:22:02.944 11:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MmM5NDVjNGVjYTIzYTcyMTUwODNiZjIyNGM2NDE3YmGHQr9X: 00:22:02.944 11:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:22:02.944 11:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:02.944 11:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:02.944 11:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:22:02.944 11:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:22:02.944 11:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:02.944 11:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:02.944 11:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:02.944 11:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:02.944 11:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:02.944 11:34:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:02.944 11:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:02.944 11:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:02.944 11:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:02.944 11:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:02.944 11:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:02.944 11:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:02.944 11:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:02.944 11:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:02.944 11:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:02.944 11:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:02.944 11:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:02.944 11:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:02.944 11:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:03.510 nvme0n1 00:22:03.510 11:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:03.510 11:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:03.510 11:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:03.510 11:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:03.510 11:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:03.510 11:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:03.510 11:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:03.510 11:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:03.510 11:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:03.510 11:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:03.510 11:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:03.510 11:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:03.510 11:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:22:03.510 11:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:03.510 11:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:03.510 11:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:22:03.510 11:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:22:03.510 11:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:MGI1MGNlN2ViZWZhM2I4MTgzMTNlOWY0YzAxY2Y4OWY1NGU4MjdjMmQxZWY5YWE0JcQyCw==: 00:22:03.510 11:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjVlZTczMzM5ZDk5NDBiNzc5NWQyMjMyNjFmMzU4NmR2PvMh: 00:22:03.510 11:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:03.510 11:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:22:03.510 11:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MGI1MGNlN2ViZWZhM2I4MTgzMTNlOWY0YzAxY2Y4OWY1NGU4MjdjMmQxZWY5YWE0JcQyCw==: 00:22:03.510 11:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjVlZTczMzM5ZDk5NDBiNzc5NWQyMjMyNjFmMzU4NmR2PvMh: ]] 00:22:03.510 11:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjVlZTczMzM5ZDk5NDBiNzc5NWQyMjMyNjFmMzU4NmR2PvMh: 00:22:03.510 11:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:22:03.510 11:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:03.510 11:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:03.510 11:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:22:03.510 11:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:22:03.510 11:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:03.510 11:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:03.510 11:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:03.510 11:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:03.510 11:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:03.510 11:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:03.510 11:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:03.510 11:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:03.510 11:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:03.510 11:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:03.510 11:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:03.510 11:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:03.510 11:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:03.510 11:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:03.510 11:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:03.510 11:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:03.510 11:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:22:03.510 11:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:03.510 
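On the host side, each connect_authenticate pass in this trace reduces to two RPC calls: restrict the allowed DH-HMAC-CHAP parameters, then attach with the matching key pair. A sketch using scripts/rpc.py directly, mirroring the rpc_cmd lines above; the key names key3/ckey3 refer to keyring entries registered earlier in the run, outside this excerpt:

    # Allow only the digest/dhgroup under test, then attach with DH-HMAC-CHAP keys.
    ./scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192

    ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key3 --dhchap-ctrlr-key ckey3

    # Each iteration then verifies the controller came up and tears it down again,
    # as the @64/@65 lines in the trace do:
    ./scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name'   # expects "nvme0"
    ./scripts/rpc.py bdev_nvme_detach_controller nvme0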
11:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:04.479 nvme0n1 00:22:04.479 11:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:04.479 11:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:04.479 11:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:04.479 11:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:04.479 11:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:04.479 11:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:04.479 11:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:04.479 11:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:04.479 11:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:04.479 11:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:04.479 11:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:04.479 11:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:04.479 11:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:22:04.479 11:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:04.479 11:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:04.479 11:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:22:04.479 11:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:22:04.479 11:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDE5MmNiNjUxNDI4Y2FiYmMzMGUyNTNmZDIyMWI1NWQ1ODk0ZjZlYjQ1NTk5NDZkMGVlZjg0ODYyZmQwOTVlYu6ZJZo=: 00:22:04.479 11:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:22:04.479 11:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:04.479 11:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:22:04.479 11:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDE5MmNiNjUxNDI4Y2FiYmMzMGUyNTNmZDIyMWI1NWQ1ODk0ZjZlYjQ1NTk5NDZkMGVlZjg0ODYyZmQwOTVlYu6ZJZo=: 00:22:04.479 11:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:22:04.479 11:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:22:04.479 11:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:04.479 11:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:04.479 11:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:22:04.479 11:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:22:04.479 11:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:04.479 11:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:04.479 11:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:22:04.479 11:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:04.479 11:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:04.479 11:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:04.479 11:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:04.479 11:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:04.479 11:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:04.479 11:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:04.479 11:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:04.479 11:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:04.479 11:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:04.479 11:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:04.479 11:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:04.479 11:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:04.479 11:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:22:04.479 11:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:04.479 11:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:05.045 nvme0n1 00:22:05.046 11:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:05.046 11:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:05.046 11:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:05.046 11:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:05.046 11:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:05.046 11:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:05.046 11:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:05.046 11:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:05.046 11:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:05.046 11:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:05.046 11:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:05.046 11:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:22:05.046 11:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:22:05.046 11:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:05.046 11:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:22:05.046 11:34:50 
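At this point the sweep has finished sha384 with ffdhe8192 and moved on to sha512 with ffdhe2048; the auth.sh@100-@103 lines show the loops driving it. The reconstructed shape of that sweep is sketched below; only the values visible in this excerpt are listed, and the full arrays in the test script may be larger:

    # Outer sweep over digest x dhgroup x key id, as traced at auth.sh@100-104.
    digests=(sha384 sha512)                      # values seen in this excerpt
    dhgroups=(ffdhe2048 ffdhe3072 ffdhe8192)

    for digest in "${digests[@]}"; do
        for dhgroup in "${dhgroups[@]}"; do
            for keyid in "${!keys[@]}"; do               # keys[0..4] hold the DHHC-1 secrets
                nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"    # target side (auth.sh@103)
                connect_authenticate "$digest" "$dhgroup" "$keyid"  # host attach/verify/detach (auth.sh@104)
            done
        done
    done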
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:05.046 11:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:05.046 11:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:05.046 11:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:22:05.046 11:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Yjg5ODgyNzQwZTY3M2Q0NTkzNDcyNjYzZjU5ZGI4ZDeC/LYh: 00:22:05.046 11:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjM2ZjY4MTAxYWI2ODMzNjM5ODc4NDQ3ODJmNjU1ZmI4ZmMwNzE5NWIzMWQ4MzllODg4YTUxN2Q0OGI4MTY2MfjOZbc=: 00:22:05.046 11:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:05.046 11:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:22:05.046 11:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Yjg5ODgyNzQwZTY3M2Q0NTkzNDcyNjYzZjU5ZGI4ZDeC/LYh: 00:22:05.046 11:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjM2ZjY4MTAxYWI2ODMzNjM5ODc4NDQ3ODJmNjU1ZmI4ZmMwNzE5NWIzMWQ4MzllODg4YTUxN2Q0OGI4MTY2MfjOZbc=: ]] 00:22:05.046 11:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjM2ZjY4MTAxYWI2ODMzNjM5ODc4NDQ3ODJmNjU1ZmI4ZmMwNzE5NWIzMWQ4MzllODg4YTUxN2Q0OGI4MTY2MfjOZbc=: 00:22:05.046 11:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:22:05.046 11:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:05.046 11:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:05.046 11:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:22:05.046 11:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:22:05.046 11:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:05.046 11:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:05.046 11:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:05.046 11:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:05.046 11:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:05.046 11:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:05.046 11:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:05.046 11:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:05.046 11:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:05.046 11:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:05.046 11:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:05.046 11:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:05.046 11:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:05.046 11:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:05.046 11:34:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:05.046 11:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:05.046 11:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:05.305 11:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:05.305 11:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:05.305 nvme0n1 00:22:05.305 11:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:05.305 11:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:05.305 11:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:05.305 11:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:05.305 11:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:05.305 11:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:05.305 11:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:05.305 11:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:05.305 11:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:05.305 11:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:05.305 11:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:05.305 11:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:05.305 11:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:22:05.305 11:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:05.305 11:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:05.305 11:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:05.305 11:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:22:05.305 11:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmM0MDdhNTg3M2E0YjJmNDRhZTA1ZmFhMzFmMzllOGUwZjI5N2FmMzU4YWFjMzI1e9mfRQ==: 00:22:05.305 11:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzlhMDMwODA2N2YyN2E0NmE4N2JhZDNlZjI3MjA4OTQ0ODg4NTk3ZGY4MWJlOTNmW0IdTg==: 00:22:05.305 11:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:05.305 11:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:22:05.305 11:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmM0MDdhNTg3M2E0YjJmNDRhZTA1ZmFhMzFmMzllOGUwZjI5N2FmMzU4YWFjMzI1e9mfRQ==: 00:22:05.305 11:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzlhMDMwODA2N2YyN2E0NmE4N2JhZDNlZjI3MjA4OTQ0ODg4NTk3ZGY4MWJlOTNmW0IdTg==: ]] 00:22:05.305 11:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzlhMDMwODA2N2YyN2E0NmE4N2JhZDNlZjI3MjA4OTQ0ODg4NTk3ZGY4MWJlOTNmW0IdTg==: 00:22:05.305 11:34:50 
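Before every attach, the trace runs through get_main_ns_ip (nvmf/common.sh@769-783), which picks the address to dial based on the transport; for TCP that is the initiator-side address, 10.0.0.1 in this run. A sketch of that selection logic reconstructed from the trace; the name of the variable carrying the transport, TEST_TRANSPORT, is an assumption, since the trace only shows its value, tcp:

    # Map transport -> environment variable holding the address, then dereference it.
    get_main_ns_ip() {
        local ip
        local -A ip_candidates
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
        ip_candidates["tcp"]=NVMF_INITIATOR_IP

        [[ -z $TEST_TRANSPORT ]] && return 1                    # trace: [[ -z tcp ]]
        [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1  # trace: [[ -z NVMF_INITIATOR_IP ]]
        ip=${ip_candidates[$TEST_TRANSPORT]}
        [[ -z ${!ip} ]] && return 1                             # trace: [[ -z 10.0.0.1 ]]
        echo "${!ip}"                                           # -> 10.0.0.1 here
    }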
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:22:05.305 11:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:05.305 11:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:05.305 11:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:22:05.305 11:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:22:05.305 11:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:05.305 11:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:05.305 11:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:05.305 11:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:05.305 11:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:05.305 11:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:05.305 11:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:05.305 11:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:05.305 11:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:05.305 11:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:05.305 11:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:05.305 11:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:05.305 11:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:05.305 11:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:05.305 11:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:05.305 11:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:05.305 11:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:05.305 11:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:05.305 11:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:05.564 nvme0n1 00:22:05.564 11:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:05.564 11:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:05.564 11:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:05.564 11:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:05.564 11:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:05.564 11:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:05.564 11:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:05.564 11:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:05.564 11:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:05.564 11:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:05.564 11:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:05.564 11:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:05.564 11:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:22:05.564 11:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:05.564 11:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:05.564 11:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:05.564 11:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:22:05.564 11:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDdmYjZlNGVkYTIzYmUzZTBmMGUwMTMwZDRmMzQ4YjE3mYm7: 00:22:05.564 11:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MmM5NDVjNGVjYTIzYTcyMTUwODNiZjIyNGM2NDE3YmGHQr9X: 00:22:05.564 11:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:05.564 11:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:22:05.564 11:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDdmYjZlNGVkYTIzYmUzZTBmMGUwMTMwZDRmMzQ4YjE3mYm7: 00:22:05.564 11:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MmM5NDVjNGVjYTIzYTcyMTUwODNiZjIyNGM2NDE3YmGHQr9X: ]] 00:22:05.564 11:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MmM5NDVjNGVjYTIzYTcyMTUwODNiZjIyNGM2NDE3YmGHQr9X: 00:22:05.564 11:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:22:05.564 11:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:05.564 11:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:05.564 11:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:22:05.564 11:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:22:05.564 11:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:05.564 11:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:05.564 11:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:05.564 11:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:05.564 11:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:05.564 11:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:05.564 11:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:05.564 11:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:05.564 11:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:05.564 11:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:05.564 11:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:05.564 11:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:05.564 11:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:05.564 11:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:05.564 11:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:05.564 11:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:05.564 11:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:05.564 11:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:05.564 11:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:05.564 nvme0n1 00:22:05.564 11:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:05.564 11:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:05.564 11:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:05.564 11:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:05.564 11:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:05.564 11:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:05.824 11:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:05.824 11:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:05.824 11:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:05.824 11:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:05.824 11:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:05.824 11:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:05.824 11:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:22:05.824 11:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:05.824 11:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:05.824 11:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:05.824 11:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:22:05.824 11:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MGI1MGNlN2ViZWZhM2I4MTgzMTNlOWY0YzAxY2Y4OWY1NGU4MjdjMmQxZWY5YWE0JcQyCw==: 00:22:05.824 11:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjVlZTczMzM5ZDk5NDBiNzc5NWQyMjMyNjFmMzU4NmR2PvMh: 00:22:05.824 11:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:05.824 11:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:22:05.824 11:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:02:MGI1MGNlN2ViZWZhM2I4MTgzMTNlOWY0YzAxY2Y4OWY1NGU4MjdjMmQxZWY5YWE0JcQyCw==: 00:22:05.824 11:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjVlZTczMzM5ZDk5NDBiNzc5NWQyMjMyNjFmMzU4NmR2PvMh: ]] 00:22:05.824 11:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjVlZTczMzM5ZDk5NDBiNzc5NWQyMjMyNjFmMzU4NmR2PvMh: 00:22:05.824 11:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:22:05.824 11:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:05.824 11:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:05.824 11:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:22:05.824 11:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:22:05.824 11:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:05.824 11:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:05.824 11:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:05.824 11:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:05.824 11:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:05.824 11:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:05.824 11:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:05.824 11:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:05.824 11:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:05.824 11:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:05.824 11:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:05.824 11:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:05.824 11:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:05.824 11:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:05.824 11:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:05.824 11:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:05.824 11:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:22:05.824 11:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:05.824 11:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:05.824 nvme0n1 00:22:05.824 11:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:05.824 11:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:05.824 11:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:05.824 11:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # jq -r '.[].name' 00:22:05.824 11:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:05.824 11:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:05.824 11:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:05.824 11:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:05.824 11:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:05.824 11:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:05.824 11:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:05.824 11:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:05.824 11:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:22:05.824 11:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:05.824 11:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:05.824 11:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:05.824 11:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:22:05.824 11:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDE5MmNiNjUxNDI4Y2FiYmMzMGUyNTNmZDIyMWI1NWQ1ODk0ZjZlYjQ1NTk5NDZkMGVlZjg0ODYyZmQwOTVlYu6ZJZo=: 00:22:05.824 11:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:22:05.824 11:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:05.824 11:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:22:05.824 11:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDE5MmNiNjUxNDI4Y2FiYmMzMGUyNTNmZDIyMWI1NWQ1ODk0ZjZlYjQ1NTk5NDZkMGVlZjg0ODYyZmQwOTVlYu6ZJZo=: 00:22:05.824 11:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:22:05.824 11:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:22:05.824 11:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:05.824 11:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:05.824 11:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:22:05.824 11:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:22:05.824 11:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:05.824 11:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:05.824 11:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:05.824 11:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:05.824 11:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:05.824 11:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:05.824 11:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:05.824 11:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # ip_candidates=() 00:22:05.824 11:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:05.824 11:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:05.824 11:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:05.824 11:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:05.824 11:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:05.824 11:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:05.824 11:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:05.824 11:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:05.824 11:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:22:05.824 11:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:05.824 11:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:06.083 nvme0n1 00:22:06.083 11:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:06.083 11:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:06.083 11:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:06.083 11:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:06.083 11:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:06.083 11:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:06.083 11:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:06.083 11:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:06.083 11:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:06.083 11:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:06.083 11:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:06.083 11:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:22:06.083 11:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:06.083 11:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:22:06.083 11:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:06.083 11:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:06.083 11:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:22:06.083 11:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:22:06.083 11:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Yjg5ODgyNzQwZTY3M2Q0NTkzNDcyNjYzZjU5ZGI4ZDeC/LYh: 00:22:06.083 11:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:YjM2ZjY4MTAxYWI2ODMzNjM5ODc4NDQ3ODJmNjU1ZmI4ZmMwNzE5NWIzMWQ4MzllODg4YTUxN2Q0OGI4MTY2MfjOZbc=: 00:22:06.083 11:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:06.083 11:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:22:06.083 11:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Yjg5ODgyNzQwZTY3M2Q0NTkzNDcyNjYzZjU5ZGI4ZDeC/LYh: 00:22:06.083 11:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjM2ZjY4MTAxYWI2ODMzNjM5ODc4NDQ3ODJmNjU1ZmI4ZmMwNzE5NWIzMWQ4MzllODg4YTUxN2Q0OGI4MTY2MfjOZbc=: ]] 00:22:06.083 11:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjM2ZjY4MTAxYWI2ODMzNjM5ODc4NDQ3ODJmNjU1ZmI4ZmMwNzE5NWIzMWQ4MzllODg4YTUxN2Q0OGI4MTY2MfjOZbc=: 00:22:06.083 11:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:22:06.083 11:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:06.083 11:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:06.083 11:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:22:06.083 11:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:22:06.083 11:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:06.083 11:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:06.083 11:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:06.083 11:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:06.083 11:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:06.083 11:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:06.083 11:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:06.083 11:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:06.084 11:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:06.084 11:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:06.084 11:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:06.084 11:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:06.084 11:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:06.084 11:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:06.084 11:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:06.084 11:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:06.084 11:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:06.084 11:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:06.084 11:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:22:06.342 nvme0n1 00:22:06.342 11:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:06.342 11:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:06.342 11:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:06.342 11:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:06.342 11:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:06.342 11:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:06.342 11:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:06.342 11:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:06.342 11:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:06.342 11:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:06.342 11:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:06.342 11:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:06.342 11:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:22:06.342 11:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:06.342 11:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:06.342 11:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:22:06.342 11:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:22:06.342 11:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmM0MDdhNTg3M2E0YjJmNDRhZTA1ZmFhMzFmMzllOGUwZjI5N2FmMzU4YWFjMzI1e9mfRQ==: 00:22:06.342 11:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzlhMDMwODA2N2YyN2E0NmE4N2JhZDNlZjI3MjA4OTQ0ODg4NTk3ZGY4MWJlOTNmW0IdTg==: 00:22:06.342 11:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:06.342 11:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:22:06.342 11:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmM0MDdhNTg3M2E0YjJmNDRhZTA1ZmFhMzFmMzllOGUwZjI5N2FmMzU4YWFjMzI1e9mfRQ==: 00:22:06.342 11:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzlhMDMwODA2N2YyN2E0NmE4N2JhZDNlZjI3MjA4OTQ0ODg4NTk3ZGY4MWJlOTNmW0IdTg==: ]] 00:22:06.342 11:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzlhMDMwODA2N2YyN2E0NmE4N2JhZDNlZjI3MjA4OTQ0ODg4NTk3ZGY4MWJlOTNmW0IdTg==: 00:22:06.342 11:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:22:06.342 11:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:06.342 11:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:06.342 11:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:22:06.342 11:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:22:06.343 11:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:22:06.343 11:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:06.343 11:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:06.343 11:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:06.343 11:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:06.343 11:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:06.343 11:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:06.343 11:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:06.343 11:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:06.343 11:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:06.343 11:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:06.343 11:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:06.343 11:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:06.343 11:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:06.343 11:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:06.343 11:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:06.343 11:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:06.343 11:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:06.343 11:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:06.343 nvme0n1 00:22:06.343 11:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:06.343 11:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:06.343 11:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:06.343 11:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:06.343 11:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:06.343 11:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:06.343 11:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:06.343 11:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:06.343 11:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:06.343 11:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:06.343 11:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:06.343 11:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:06.343 11:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:22:06.343 
11:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:06.343 11:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:06.343 11:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:22:06.343 11:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:22:06.343 11:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDdmYjZlNGVkYTIzYmUzZTBmMGUwMTMwZDRmMzQ4YjE3mYm7: 00:22:06.343 11:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MmM5NDVjNGVjYTIzYTcyMTUwODNiZjIyNGM2NDE3YmGHQr9X: 00:22:06.343 11:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:06.343 11:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:22:06.343 11:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDdmYjZlNGVkYTIzYmUzZTBmMGUwMTMwZDRmMzQ4YjE3mYm7: 00:22:06.343 11:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MmM5NDVjNGVjYTIzYTcyMTUwODNiZjIyNGM2NDE3YmGHQr9X: ]] 00:22:06.343 11:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MmM5NDVjNGVjYTIzYTcyMTUwODNiZjIyNGM2NDE3YmGHQr9X: 00:22:06.343 11:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:22:06.343 11:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:06.343 11:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:06.343 11:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:22:06.343 11:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:22:06.343 11:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:06.343 11:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:06.343 11:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:06.343 11:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:06.601 11:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:06.601 11:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:06.601 11:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:06.601 11:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:06.601 11:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:06.601 11:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:06.601 11:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:06.601 11:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:06.601 11:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:06.601 11:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:06.601 11:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:06.601 11:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:06.601 11:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:06.601 11:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:06.601 11:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:06.601 nvme0n1 00:22:06.601 11:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:06.601 11:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:06.601 11:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:06.601 11:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:06.602 11:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:06.602 11:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:06.602 11:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:06.602 11:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:06.602 11:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:06.602 11:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:06.602 11:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:06.602 11:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:06.602 11:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:22:06.602 11:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:06.602 11:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:06.602 11:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:22:06.602 11:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:22:06.602 11:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MGI1MGNlN2ViZWZhM2I4MTgzMTNlOWY0YzAxY2Y4OWY1NGU4MjdjMmQxZWY5YWE0JcQyCw==: 00:22:06.602 11:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjVlZTczMzM5ZDk5NDBiNzc5NWQyMjMyNjFmMzU4NmR2PvMh: 00:22:06.602 11:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:06.602 11:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:22:06.602 11:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MGI1MGNlN2ViZWZhM2I4MTgzMTNlOWY0YzAxY2Y4OWY1NGU4MjdjMmQxZWY5YWE0JcQyCw==: 00:22:06.602 11:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjVlZTczMzM5ZDk5NDBiNzc5NWQyMjMyNjFmMzU4NmR2PvMh: ]] 00:22:06.602 11:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjVlZTczMzM5ZDk5NDBiNzc5NWQyMjMyNjFmMzU4NmR2PvMh: 00:22:06.602 11:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:22:06.602 11:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:06.602 
11:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:06.602 11:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:22:06.602 11:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:22:06.602 11:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:06.602 11:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:06.602 11:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:06.602 11:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:06.602 11:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:06.602 11:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:06.602 11:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:06.602 11:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:06.602 11:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:06.602 11:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:06.602 11:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:06.602 11:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:06.602 11:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:06.602 11:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:06.602 11:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:06.602 11:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:06.602 11:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:22:06.602 11:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:06.602 11:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:06.861 nvme0n1 00:22:06.861 11:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:06.861 11:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:06.861 11:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:06.861 11:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:06.861 11:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:06.861 11:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:06.861 11:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:06.861 11:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:06.861 11:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:06.861 11:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:22:06.861 11:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:06.861 11:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:06.861 11:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:22:06.861 11:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:06.861 11:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:06.861 11:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:22:06.861 11:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:22:06.861 11:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDE5MmNiNjUxNDI4Y2FiYmMzMGUyNTNmZDIyMWI1NWQ1ODk0ZjZlYjQ1NTk5NDZkMGVlZjg0ODYyZmQwOTVlYu6ZJZo=: 00:22:06.861 11:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:22:06.861 11:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:06.861 11:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:22:06.861 11:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDE5MmNiNjUxNDI4Y2FiYmMzMGUyNTNmZDIyMWI1NWQ1ODk0ZjZlYjQ1NTk5NDZkMGVlZjg0ODYyZmQwOTVlYu6ZJZo=: 00:22:06.861 11:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:22:06.861 11:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:22:06.861 11:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:06.861 11:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:06.861 11:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:22:06.861 11:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:22:06.861 11:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:06.861 11:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:06.861 11:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:06.861 11:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:06.861 11:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:06.861 11:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:06.861 11:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:06.861 11:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:06.861 11:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:06.861 11:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:06.861 11:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:06.861 11:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:06.861 11:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:06.861 11:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:06.861 11:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:06.861 11:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:06.861 11:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:22:06.861 11:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:06.861 11:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:07.120 nvme0n1 00:22:07.120 11:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:07.120 11:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:07.120 11:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:07.120 11:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:07.120 11:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:07.120 11:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:07.120 11:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:07.120 11:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:07.120 11:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:07.120 11:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:07.120 11:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:07.120 11:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:22:07.120 11:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:07.120 11:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:22:07.120 11:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:07.120 11:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:07.120 11:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:22:07.120 11:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:22:07.120 11:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Yjg5ODgyNzQwZTY3M2Q0NTkzNDcyNjYzZjU5ZGI4ZDeC/LYh: 00:22:07.120 11:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjM2ZjY4MTAxYWI2ODMzNjM5ODc4NDQ3ODJmNjU1ZmI4ZmMwNzE5NWIzMWQ4MzllODg4YTUxN2Q0OGI4MTY2MfjOZbc=: 00:22:07.120 11:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:07.120 11:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:22:07.120 11:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Yjg5ODgyNzQwZTY3M2Q0NTkzNDcyNjYzZjU5ZGI4ZDeC/LYh: 00:22:07.120 11:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjM2ZjY4MTAxYWI2ODMzNjM5ODc4NDQ3ODJmNjU1ZmI4ZmMwNzE5NWIzMWQ4MzllODg4YTUxN2Q0OGI4MTY2MfjOZbc=: ]] 00:22:07.120 11:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:YjM2ZjY4MTAxYWI2ODMzNjM5ODc4NDQ3ODJmNjU1ZmI4ZmMwNzE5NWIzMWQ4MzllODg4YTUxN2Q0OGI4MTY2MfjOZbc=: 00:22:07.120 11:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:22:07.120 11:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:07.120 11:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:07.120 11:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:22:07.120 11:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:22:07.120 11:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:07.120 11:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:07.120 11:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:07.120 11:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:07.120 11:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:07.120 11:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:07.120 11:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:07.120 11:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:07.120 11:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:07.120 11:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:07.120 11:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:07.120 11:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:07.120 11:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:07.120 11:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:07.120 11:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:07.120 11:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:07.120 11:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:07.120 11:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:07.120 11:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:07.379 nvme0n1 00:22:07.379 11:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:07.379 11:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:07.379 11:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:07.379 11:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:07.379 11:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:07.379 11:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:07.379 
11:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:07.379 11:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:07.379 11:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:07.379 11:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:07.379 11:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:07.379 11:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:07.379 11:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:22:07.379 11:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:07.379 11:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:07.379 11:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:22:07.379 11:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:22:07.379 11:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmM0MDdhNTg3M2E0YjJmNDRhZTA1ZmFhMzFmMzllOGUwZjI5N2FmMzU4YWFjMzI1e9mfRQ==: 00:22:07.379 11:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzlhMDMwODA2N2YyN2E0NmE4N2JhZDNlZjI3MjA4OTQ0ODg4NTk3ZGY4MWJlOTNmW0IdTg==: 00:22:07.379 11:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:07.379 11:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:22:07.379 11:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmM0MDdhNTg3M2E0YjJmNDRhZTA1ZmFhMzFmMzllOGUwZjI5N2FmMzU4YWFjMzI1e9mfRQ==: 00:22:07.379 11:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzlhMDMwODA2N2YyN2E0NmE4N2JhZDNlZjI3MjA4OTQ0ODg4NTk3ZGY4MWJlOTNmW0IdTg==: ]] 00:22:07.379 11:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzlhMDMwODA2N2YyN2E0NmE4N2JhZDNlZjI3MjA4OTQ0ODg4NTk3ZGY4MWJlOTNmW0IdTg==: 00:22:07.379 11:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:22:07.379 11:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:07.380 11:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:07.380 11:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:22:07.380 11:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:22:07.380 11:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:07.380 11:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:07.380 11:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:07.380 11:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:07.380 11:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:07.380 11:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:07.380 11:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:07.380 11:34:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:07.380 11:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:07.380 11:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:07.380 11:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:07.380 11:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:07.380 11:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:07.380 11:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:07.380 11:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:07.380 11:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:07.380 11:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:07.380 11:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:07.380 11:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:07.638 nvme0n1 00:22:07.638 11:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:07.638 11:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:07.638 11:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:07.638 11:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:07.638 11:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:07.638 11:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:07.638 11:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:07.638 11:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:07.638 11:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:07.638 11:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:07.638 11:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:07.638 11:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:07.638 11:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:22:07.638 11:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:07.638 11:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:07.638 11:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:22:07.638 11:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:22:07.638 11:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDdmYjZlNGVkYTIzYmUzZTBmMGUwMTMwZDRmMzQ4YjE3mYm7: 00:22:07.638 11:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MmM5NDVjNGVjYTIzYTcyMTUwODNiZjIyNGM2NDE3YmGHQr9X: 00:22:07.638 11:34:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:07.638 11:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:22:07.638 11:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDdmYjZlNGVkYTIzYmUzZTBmMGUwMTMwZDRmMzQ4YjE3mYm7: 00:22:07.638 11:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MmM5NDVjNGVjYTIzYTcyMTUwODNiZjIyNGM2NDE3YmGHQr9X: ]] 00:22:07.638 11:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MmM5NDVjNGVjYTIzYTcyMTUwODNiZjIyNGM2NDE3YmGHQr9X: 00:22:07.638 11:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:22:07.638 11:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:07.638 11:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:07.638 11:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:22:07.638 11:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:22:07.638 11:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:07.638 11:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:07.638 11:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:07.638 11:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:07.914 11:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:07.914 11:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:07.914 11:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:07.914 11:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:07.914 11:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:07.914 11:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:07.914 11:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:07.914 11:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:07.914 11:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:07.914 11:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:07.914 11:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:07.914 11:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:07.914 11:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:07.914 11:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:07.914 11:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:07.914 nvme0n1 00:22:07.914 11:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:07.914 11:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:07.914 11:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:07.914 11:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:07.914 11:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:07.914 11:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:07.914 11:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:07.914 11:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:07.914 11:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:07.914 11:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:08.179 11:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:08.179 11:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:08.179 11:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:22:08.179 11:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:08.179 11:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:08.179 11:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:22:08.179 11:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:22:08.179 11:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MGI1MGNlN2ViZWZhM2I4MTgzMTNlOWY0YzAxY2Y4OWY1NGU4MjdjMmQxZWY5YWE0JcQyCw==: 00:22:08.179 11:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjVlZTczMzM5ZDk5NDBiNzc5NWQyMjMyNjFmMzU4NmR2PvMh: 00:22:08.179 11:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:08.179 11:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:22:08.179 11:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MGI1MGNlN2ViZWZhM2I4MTgzMTNlOWY0YzAxY2Y4OWY1NGU4MjdjMmQxZWY5YWE0JcQyCw==: 00:22:08.179 11:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjVlZTczMzM5ZDk5NDBiNzc5NWQyMjMyNjFmMzU4NmR2PvMh: ]] 00:22:08.179 11:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjVlZTczMzM5ZDk5NDBiNzc5NWQyMjMyNjFmMzU4NmR2PvMh: 00:22:08.179 11:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:22:08.179 11:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:08.179 11:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:08.179 11:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:22:08.179 11:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:22:08.179 11:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:08.179 11:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:08.179 11:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:08.179 11:34:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:08.179 11:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:08.179 11:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:08.179 11:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:08.179 11:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:08.179 11:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:08.179 11:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:08.179 11:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:08.179 11:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:08.179 11:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:08.179 11:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:08.179 11:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:08.179 11:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:08.179 11:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:22:08.179 11:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:08.179 11:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:08.179 nvme0n1 00:22:08.179 11:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:08.179 11:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:08.179 11:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:08.179 11:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:08.179 11:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:08.179 11:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:08.437 11:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:08.437 11:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:08.437 11:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:08.437 11:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:08.437 11:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:08.437 11:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:08.437 11:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:22:08.437 11:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:08.437 11:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:08.438 11:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:22:08.438 
11:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:22:08.438 11:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDE5MmNiNjUxNDI4Y2FiYmMzMGUyNTNmZDIyMWI1NWQ1ODk0ZjZlYjQ1NTk5NDZkMGVlZjg0ODYyZmQwOTVlYu6ZJZo=: 00:22:08.438 11:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:22:08.438 11:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:08.438 11:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:22:08.438 11:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDE5MmNiNjUxNDI4Y2FiYmMzMGUyNTNmZDIyMWI1NWQ1ODk0ZjZlYjQ1NTk5NDZkMGVlZjg0ODYyZmQwOTVlYu6ZJZo=: 00:22:08.438 11:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:22:08.438 11:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:22:08.438 11:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:08.438 11:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:08.438 11:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:22:08.438 11:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:22:08.438 11:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:08.438 11:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:08.438 11:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:08.438 11:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:08.438 11:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:08.438 11:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:08.438 11:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:08.438 11:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:08.438 11:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:08.438 11:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:08.438 11:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:08.438 11:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:08.438 11:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:08.438 11:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:08.438 11:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:08.438 11:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:08.438 11:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:22:08.438 11:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:08.438 11:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
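The trace up to this point completes one full pass of the sha512/ffdhe4096 combinations: for each key index the harness loads the key (and, when one exists, the bidirectional controller key) into the target, restricts the initiator to a single digest/DH group, attaches over TCP with DH-HMAC-CHAP, checks that the controller actually shows up, and detaches before the next combination. Below is a minimal bash sketch of that connect/verify/detach step, reconstructed only from the commands visible in the trace; rpc_cmd is the autotest wrapper around SPDK's rpc.py, and the ckeys array, NQNs, address, port and key names are taken from the log itself — anything beyond that is an illustrative assumption, not the literal host/auth.sh source.

#!/usr/bin/env bash
# Sketch of the per-key connect_authenticate cycle seen in the trace above.
# Assumes the harness has already registered the keyN/ckeyN secrets and that
# rpc_cmd (the autotest wrapper around scripts/rpc.py) is on the PATH.

declare -a ckeys=()               # controller (bidirectional) secrets per keyid; empty slot => unidirectional auth
subnqn=nqn.2024-02.io.spdk:cnode0
hostnqn=nqn.2024-02.io.spdk:host0
target_ip=10.0.0.1
port=4420

connect_authenticate_sketch() {
    local digest=$1 dhgroup=$2 keyid=$3
    local -a ckey=()
    # Pass a controller key only when one exists for this keyid, mirroring the
    # ${ckeys[keyid]:+--dhchap-ctrlr-key ...} expansion shown in the trace.
    [[ -n ${ckeys[keyid]:-} ]] && ckey=(--dhchap-ctrlr-key "ckey${keyid}")

    # Restrict the initiator to the digest/DH group under test.
    rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

    # Attach with DH-HMAC-CHAP using the named keys registered earlier.
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a "$target_ip" -s "$port" -q "$hostnqn" -n "$subnqn" \
        --dhchap-key "key${keyid}" "${ckey[@]}"

    # Authentication succeeded only if the controller is actually visible.
    local name
    name=$(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name')
    [[ $name == nvme0 ]] || return 1

    # Tear down so the next (dhgroup, keyid) combination starts clean.
    rpc_cmd bdev_nvme_detach_controller nvme0
}

Every pass in the log is this same cycle with a different dhgroup/keyid pair; keyid 4 has no controller key, which is why those attaches appear without --dhchap-ctrlr-key. The trace that follows continues with the ffdhe6144 group.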
00:22:08.697 nvme0n1 00:22:08.697 11:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:08.697 11:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:08.697 11:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:08.697 11:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:08.697 11:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:08.697 11:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:08.697 11:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:08.697 11:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:08.697 11:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:08.697 11:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:08.697 11:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:08.697 11:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:22:08.697 11:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:08.697 11:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:22:08.697 11:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:08.697 11:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:08.697 11:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:22:08.697 11:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:22:08.697 11:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Yjg5ODgyNzQwZTY3M2Q0NTkzNDcyNjYzZjU5ZGI4ZDeC/LYh: 00:22:08.697 11:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjM2ZjY4MTAxYWI2ODMzNjM5ODc4NDQ3ODJmNjU1ZmI4ZmMwNzE5NWIzMWQ4MzllODg4YTUxN2Q0OGI4MTY2MfjOZbc=: 00:22:08.697 11:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:08.697 11:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:22:08.697 11:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Yjg5ODgyNzQwZTY3M2Q0NTkzNDcyNjYzZjU5ZGI4ZDeC/LYh: 00:22:08.697 11:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjM2ZjY4MTAxYWI2ODMzNjM5ODc4NDQ3ODJmNjU1ZmI4ZmMwNzE5NWIzMWQ4MzllODg4YTUxN2Q0OGI4MTY2MfjOZbc=: ]] 00:22:08.697 11:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjM2ZjY4MTAxYWI2ODMzNjM5ODc4NDQ3ODJmNjU1ZmI4ZmMwNzE5NWIzMWQ4MzllODg4YTUxN2Q0OGI4MTY2MfjOZbc=: 00:22:08.698 11:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:22:08.698 11:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:08.698 11:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:08.698 11:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:22:08.698 11:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:22:08.698 11:34:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:08.698 11:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:08.698 11:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:08.698 11:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:08.698 11:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:08.698 11:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:08.698 11:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:08.698 11:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:08.698 11:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:08.698 11:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:08.698 11:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:08.698 11:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:08.698 11:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:08.698 11:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:08.698 11:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:08.698 11:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:08.698 11:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:08.698 11:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:08.698 11:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:08.957 nvme0n1 00:22:08.957 11:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:08.957 11:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:08.957 11:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:08.957 11:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:08.957 11:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:08.957 11:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:09.217 11:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:09.217 11:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:09.217 11:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:09.217 11:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:09.217 11:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:09.217 11:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:09.217 11:34:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:22:09.217 11:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:09.217 11:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:09.217 11:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:22:09.217 11:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:22:09.217 11:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmM0MDdhNTg3M2E0YjJmNDRhZTA1ZmFhMzFmMzllOGUwZjI5N2FmMzU4YWFjMzI1e9mfRQ==: 00:22:09.217 11:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzlhMDMwODA2N2YyN2E0NmE4N2JhZDNlZjI3MjA4OTQ0ODg4NTk3ZGY4MWJlOTNmW0IdTg==: 00:22:09.217 11:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:09.217 11:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:22:09.217 11:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmM0MDdhNTg3M2E0YjJmNDRhZTA1ZmFhMzFmMzllOGUwZjI5N2FmMzU4YWFjMzI1e9mfRQ==: 00:22:09.217 11:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzlhMDMwODA2N2YyN2E0NmE4N2JhZDNlZjI3MjA4OTQ0ODg4NTk3ZGY4MWJlOTNmW0IdTg==: ]] 00:22:09.217 11:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzlhMDMwODA2N2YyN2E0NmE4N2JhZDNlZjI3MjA4OTQ0ODg4NTk3ZGY4MWJlOTNmW0IdTg==: 00:22:09.217 11:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:22:09.217 11:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:09.217 11:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:09.217 11:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:22:09.217 11:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:22:09.217 11:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:09.217 11:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:09.217 11:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:09.217 11:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:09.217 11:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:09.217 11:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:09.217 11:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:09.217 11:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:09.217 11:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:09.217 11:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:09.217 11:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:09.217 11:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:09.217 11:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:09.217 11:34:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:09.217 11:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:09.217 11:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:09.217 11:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:09.217 11:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:09.217 11:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:09.476 nvme0n1 00:22:09.476 11:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:09.476 11:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:09.476 11:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:09.476 11:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:09.476 11:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:09.476 11:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:09.476 11:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:09.476 11:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:09.476 11:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:09.476 11:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:09.476 11:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:09.476 11:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:09.476 11:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:22:09.476 11:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:09.476 11:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:09.476 11:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:22:09.476 11:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:22:09.476 11:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDdmYjZlNGVkYTIzYmUzZTBmMGUwMTMwZDRmMzQ4YjE3mYm7: 00:22:09.476 11:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MmM5NDVjNGVjYTIzYTcyMTUwODNiZjIyNGM2NDE3YmGHQr9X: 00:22:09.476 11:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:09.476 11:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:22:09.476 11:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDdmYjZlNGVkYTIzYmUzZTBmMGUwMTMwZDRmMzQ4YjE3mYm7: 00:22:09.476 11:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MmM5NDVjNGVjYTIzYTcyMTUwODNiZjIyNGM2NDE3YmGHQr9X: ]] 00:22:09.476 11:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MmM5NDVjNGVjYTIzYTcyMTUwODNiZjIyNGM2NDE3YmGHQr9X: 00:22:09.476 11:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:22:09.476 11:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:09.476 11:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:09.477 11:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:22:09.477 11:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:22:09.477 11:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:09.477 11:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:09.477 11:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:09.477 11:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:09.477 11:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:09.477 11:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:09.477 11:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:09.477 11:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:09.477 11:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:09.477 11:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:09.477 11:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:09.477 11:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:09.477 11:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:09.477 11:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:09.477 11:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:09.477 11:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:09.477 11:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:09.477 11:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:09.477 11:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:10.043 nvme0n1 00:22:10.043 11:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:10.043 11:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:10.043 11:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:10.043 11:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:10.043 11:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:10.043 11:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:10.043 11:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:10.043 11:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:22:10.043 11:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:10.043 11:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:10.043 11:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:10.043 11:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:10.043 11:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:22:10.043 11:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:10.043 11:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:10.043 11:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:22:10.043 11:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:22:10.043 11:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MGI1MGNlN2ViZWZhM2I4MTgzMTNlOWY0YzAxY2Y4OWY1NGU4MjdjMmQxZWY5YWE0JcQyCw==: 00:22:10.043 11:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjVlZTczMzM5ZDk5NDBiNzc5NWQyMjMyNjFmMzU4NmR2PvMh: 00:22:10.043 11:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:10.043 11:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:22:10.043 11:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MGI1MGNlN2ViZWZhM2I4MTgzMTNlOWY0YzAxY2Y4OWY1NGU4MjdjMmQxZWY5YWE0JcQyCw==: 00:22:10.043 11:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjVlZTczMzM5ZDk5NDBiNzc5NWQyMjMyNjFmMzU4NmR2PvMh: ]] 00:22:10.043 11:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjVlZTczMzM5ZDk5NDBiNzc5NWQyMjMyNjFmMzU4NmR2PvMh: 00:22:10.043 11:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:22:10.043 11:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:10.043 11:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:10.043 11:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:22:10.043 11:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:22:10.043 11:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:10.043 11:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:10.043 11:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:10.043 11:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:10.043 11:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:10.043 11:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:10.043 11:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:10.043 11:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:10.043 11:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:10.043 11:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:10.043 11:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:10.043 11:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:10.043 11:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:10.043 11:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:10.043 11:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:10.043 11:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:10.043 11:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:22:10.043 11:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:10.043 11:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:10.300 nvme0n1 00:22:10.300 11:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:10.300 11:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:10.301 11:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:10.301 11:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:10.301 11:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:10.301 11:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:10.559 11:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:10.559 11:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:10.559 11:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:10.559 11:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:10.559 11:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:10.559 11:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:10.559 11:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:22:10.559 11:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:10.559 11:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:10.559 11:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:22:10.559 11:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:22:10.559 11:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDE5MmNiNjUxNDI4Y2FiYmMzMGUyNTNmZDIyMWI1NWQ1ODk0ZjZlYjQ1NTk5NDZkMGVlZjg0ODYyZmQwOTVlYu6ZJZo=: 00:22:10.559 11:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:22:10.559 11:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:10.559 11:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:22:10.559 11:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:NDE5MmNiNjUxNDI4Y2FiYmMzMGUyNTNmZDIyMWI1NWQ1ODk0ZjZlYjQ1NTk5NDZkMGVlZjg0ODYyZmQwOTVlYu6ZJZo=: 00:22:10.559 11:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:22:10.559 11:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:22:10.559 11:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:10.559 11:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:10.559 11:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:22:10.559 11:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:22:10.559 11:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:10.559 11:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:10.559 11:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:10.559 11:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:10.559 11:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:10.559 11:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:10.559 11:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:10.559 11:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:10.559 11:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:10.559 11:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:10.559 11:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:10.559 11:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:10.559 11:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:10.559 11:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:10.559 11:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:10.559 11:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:10.559 11:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:22:10.559 11:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:10.559 11:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:11.126 nvme0n1 00:22:11.126 11:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:11.126 11:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:11.126 11:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:11.126 11:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:11.126 11:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:11.126 11:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:11.126 11:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:11.126 11:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:11.126 11:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:11.126 11:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:11.126 11:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:11.126 11:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:22:11.126 11:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:11.126 11:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:22:11.126 11:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:11.126 11:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:11.126 11:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:22:11.126 11:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:22:11.126 11:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Yjg5ODgyNzQwZTY3M2Q0NTkzNDcyNjYzZjU5ZGI4ZDeC/LYh: 00:22:11.126 11:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjM2ZjY4MTAxYWI2ODMzNjM5ODc4NDQ3ODJmNjU1ZmI4ZmMwNzE5NWIzMWQ4MzllODg4YTUxN2Q0OGI4MTY2MfjOZbc=: 00:22:11.126 11:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:11.126 11:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:22:11.126 11:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Yjg5ODgyNzQwZTY3M2Q0NTkzNDcyNjYzZjU5ZGI4ZDeC/LYh: 00:22:11.126 11:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjM2ZjY4MTAxYWI2ODMzNjM5ODc4NDQ3ODJmNjU1ZmI4ZmMwNzE5NWIzMWQ4MzllODg4YTUxN2Q0OGI4MTY2MfjOZbc=: ]] 00:22:11.126 11:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjM2ZjY4MTAxYWI2ODMzNjM5ODc4NDQ3ODJmNjU1ZmI4ZmMwNzE5NWIzMWQ4MzllODg4YTUxN2Q0OGI4MTY2MfjOZbc=: 00:22:11.126 11:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:22:11.126 11:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:11.126 11:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:11.126 11:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:22:11.126 11:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:22:11.126 11:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:11.127 11:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:11.127 11:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:11.127 11:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:11.127 11:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:11.127 11:34:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:11.127 11:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:11.127 11:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:11.127 11:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:11.127 11:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:11.127 11:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:11.127 11:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:11.127 11:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:11.127 11:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:11.127 11:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:11.127 11:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:11.127 11:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:11.127 11:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:11.127 11:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:11.693 nvme0n1 00:22:11.693 11:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:11.693 11:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:11.693 11:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:11.693 11:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:11.693 11:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:11.693 11:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:11.693 11:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:11.693 11:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:11.693 11:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:11.693 11:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:11.693 11:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:11.693 11:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:11.693 11:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:22:11.693 11:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:11.693 11:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:11.693 11:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:22:11.693 11:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:22:11.693 11:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:YmM0MDdhNTg3M2E0YjJmNDRhZTA1ZmFhMzFmMzllOGUwZjI5N2FmMzU4YWFjMzI1e9mfRQ==: 00:22:11.693 11:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzlhMDMwODA2N2YyN2E0NmE4N2JhZDNlZjI3MjA4OTQ0ODg4NTk3ZGY4MWJlOTNmW0IdTg==: 00:22:11.693 11:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:11.693 11:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:22:11.693 11:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmM0MDdhNTg3M2E0YjJmNDRhZTA1ZmFhMzFmMzllOGUwZjI5N2FmMzU4YWFjMzI1e9mfRQ==: 00:22:11.693 11:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzlhMDMwODA2N2YyN2E0NmE4N2JhZDNlZjI3MjA4OTQ0ODg4NTk3ZGY4MWJlOTNmW0IdTg==: ]] 00:22:11.693 11:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzlhMDMwODA2N2YyN2E0NmE4N2JhZDNlZjI3MjA4OTQ0ODg4NTk3ZGY4MWJlOTNmW0IdTg==: 00:22:11.693 11:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:22:11.693 11:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:11.693 11:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:11.693 11:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:22:11.693 11:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:22:11.693 11:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:11.693 11:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:11.693 11:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:11.693 11:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:11.693 11:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:11.693 11:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:11.693 11:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:11.693 11:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:11.693 11:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:11.693 11:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:11.693 11:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:11.693 11:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:11.693 11:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:11.693 11:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:11.693 11:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:11.693 11:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:11.693 11:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:11.693 11:34:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:11.693 11:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:12.627 nvme0n1 00:22:12.627 11:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:12.627 11:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:12.627 11:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:12.627 11:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:12.627 11:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:12.627 11:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:12.627 11:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:12.627 11:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:12.627 11:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:12.627 11:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:12.627 11:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:12.627 11:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:12.627 11:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:22:12.627 11:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:12.627 11:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:12.627 11:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:22:12.627 11:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:22:12.627 11:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDdmYjZlNGVkYTIzYmUzZTBmMGUwMTMwZDRmMzQ4YjE3mYm7: 00:22:12.627 11:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MmM5NDVjNGVjYTIzYTcyMTUwODNiZjIyNGM2NDE3YmGHQr9X: 00:22:12.627 11:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:12.627 11:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:22:12.627 11:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDdmYjZlNGVkYTIzYmUzZTBmMGUwMTMwZDRmMzQ4YjE3mYm7: 00:22:12.627 11:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MmM5NDVjNGVjYTIzYTcyMTUwODNiZjIyNGM2NDE3YmGHQr9X: ]] 00:22:12.627 11:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MmM5NDVjNGVjYTIzYTcyMTUwODNiZjIyNGM2NDE3YmGHQr9X: 00:22:12.627 11:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:22:12.627 11:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:12.627 11:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:12.627 11:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:22:12.627 11:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:22:12.627 11:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:12.627 11:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:12.627 11:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:12.627 11:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:12.627 11:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:12.627 11:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:12.627 11:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:12.627 11:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:12.627 11:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:12.627 11:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:12.627 11:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:12.627 11:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:12.627 11:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:12.627 11:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:12.627 11:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:12.627 11:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:12.627 11:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:12.627 11:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:12.627 11:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:13.193 nvme0n1 00:22:13.193 11:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:13.193 11:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:13.193 11:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:13.193 11:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:13.193 11:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:13.193 11:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:13.193 11:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:13.193 11:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:13.193 11:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:13.193 11:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:13.193 11:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:13.193 11:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:13.193 11:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe8192 3 00:22:13.193 11:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:13.193 11:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:13.193 11:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:22:13.194 11:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:22:13.194 11:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MGI1MGNlN2ViZWZhM2I4MTgzMTNlOWY0YzAxY2Y4OWY1NGU4MjdjMmQxZWY5YWE0JcQyCw==: 00:22:13.194 11:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjVlZTczMzM5ZDk5NDBiNzc5NWQyMjMyNjFmMzU4NmR2PvMh: 00:22:13.194 11:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:13.194 11:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:22:13.194 11:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MGI1MGNlN2ViZWZhM2I4MTgzMTNlOWY0YzAxY2Y4OWY1NGU4MjdjMmQxZWY5YWE0JcQyCw==: 00:22:13.194 11:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjVlZTczMzM5ZDk5NDBiNzc5NWQyMjMyNjFmMzU4NmR2PvMh: ]] 00:22:13.194 11:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjVlZTczMzM5ZDk5NDBiNzc5NWQyMjMyNjFmMzU4NmR2PvMh: 00:22:13.194 11:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:22:13.194 11:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:13.194 11:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:13.194 11:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:22:13.194 11:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:22:13.194 11:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:13.194 11:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:13.194 11:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:13.194 11:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:13.194 11:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:13.194 11:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:13.194 11:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:13.194 11:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:13.194 11:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:13.194 11:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:13.194 11:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:13.194 11:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:13.194 11:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:13.194 11:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:13.194 11:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:13.194 11:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:13.194 11:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:22:13.194 11:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:13.194 11:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:13.761 nvme0n1 00:22:13.761 11:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:13.761 11:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:13.761 11:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:13.761 11:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:13.761 11:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:13.761 11:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.100 11:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:14.100 11:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:14.100 11:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.100 11:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:14.100 11:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.100 11:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:14.100 11:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:22:14.100 11:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:14.100 11:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:14.100 11:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:22:14.100 11:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:22:14.100 11:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDE5MmNiNjUxNDI4Y2FiYmMzMGUyNTNmZDIyMWI1NWQ1ODk0ZjZlYjQ1NTk5NDZkMGVlZjg0ODYyZmQwOTVlYu6ZJZo=: 00:22:14.100 11:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:22:14.100 11:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:14.100 11:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:22:14.100 11:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDE5MmNiNjUxNDI4Y2FiYmMzMGUyNTNmZDIyMWI1NWQ1ODk0ZjZlYjQ1NTk5NDZkMGVlZjg0ODYyZmQwOTVlYu6ZJZo=: 00:22:14.100 11:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:22:14.100 11:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:22:14.100 11:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:14.100 11:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:14.100 11:34:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:22:14.100 11:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:22:14.100 11:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:14.100 11:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:14.100 11:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.100 11:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:14.100 11:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.100 11:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:14.100 11:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:14.100 11:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:14.100 11:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:14.100 11:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:14.100 11:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:14.100 11:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:14.100 11:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:14.100 11:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:14.100 11:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:14.100 11:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:14.100 11:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:22:14.100 11:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.100 11:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:14.668 nvme0n1 00:22:14.668 11:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.668 11:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:14.668 11:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:14.668 11:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.668 11:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:14.668 11:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.668 11:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:14.668 11:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:14.668 11:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.668 11:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:14.668 11:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.668 11:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:22:14.668 11:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:14.668 11:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:14.668 11:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:14.668 11:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:22:14.668 11:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmM0MDdhNTg3M2E0YjJmNDRhZTA1ZmFhMzFmMzllOGUwZjI5N2FmMzU4YWFjMzI1e9mfRQ==: 00:22:14.668 11:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzlhMDMwODA2N2YyN2E0NmE4N2JhZDNlZjI3MjA4OTQ0ODg4NTk3ZGY4MWJlOTNmW0IdTg==: 00:22:14.668 11:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:14.668 11:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:22:14.668 11:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmM0MDdhNTg3M2E0YjJmNDRhZTA1ZmFhMzFmMzllOGUwZjI5N2FmMzU4YWFjMzI1e9mfRQ==: 00:22:14.668 11:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzlhMDMwODA2N2YyN2E0NmE4N2JhZDNlZjI3MjA4OTQ0ODg4NTk3ZGY4MWJlOTNmW0IdTg==: ]] 00:22:14.668 11:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzlhMDMwODA2N2YyN2E0NmE4N2JhZDNlZjI3MjA4OTQ0ODg4NTk3ZGY4MWJlOTNmW0IdTg==: 00:22:14.668 11:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:22:14.668 11:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.668 11:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:14.668 11:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.668 11:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:22:14.668 11:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:14.668 11:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:14.668 11:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:14.669 11:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:14.669 11:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:14.669 11:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:14.669 11:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:14.669 11:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:14.669 11:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:14.669 11:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:14.669 11:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:22:14.669 11:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # 
local es=0 00:22:14.669 11:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:22:14.669 11:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:22:14.669 11:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:14.669 11:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:22:14.669 11:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:14.669 11:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:22:14.669 11:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.669 11:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:14.669 2024/11/20 11:35:00 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2024-02.io.spdk:host0 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-02.io.spdk:cnode0 traddr:10.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:22:14.669 request: 00:22:14.669 { 00:22:14.669 "method": "bdev_nvme_attach_controller", 00:22:14.669 "params": { 00:22:14.669 "name": "nvme0", 00:22:14.669 "trtype": "tcp", 00:22:14.669 "traddr": "10.0.0.1", 00:22:14.669 "adrfam": "ipv4", 00:22:14.669 "trsvcid": "4420", 00:22:14.669 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:22:14.669 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:22:14.669 "prchk_reftag": false, 00:22:14.669 "prchk_guard": false, 00:22:14.669 "hdgst": false, 00:22:14.669 "ddgst": false, 00:22:14.669 "allow_unrecognized_csi": false 00:22:14.669 } 00:22:14.669 } 00:22:14.669 Got JSON-RPC error response 00:22:14.669 GoRPCClient: error on JSON-RPC call 00:22:14.669 11:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:22:14.669 11:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:22:14.669 11:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:14.669 11:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:14.669 11:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:14.669 11:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:22:14.669 11:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:22:14.669 11:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.669 11:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:14.669 11:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.669 11:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:22:14.669 11:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # 
get_main_ns_ip 00:22:14.669 11:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:14.669 11:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:14.669 11:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:14.669 11:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:14.669 11:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:14.669 11:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:14.669 11:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:14.669 11:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:14.669 11:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:14.669 11:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:14.669 11:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:22:14.669 11:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:22:14.669 11:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:22:14.669 11:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:22:14.669 11:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:14.669 11:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:22:14.669 11:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:14.669 11:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:22:14.669 11:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.669 11:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:14.669 2024/11/20 11:35:00 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) dhchap_key:key2 hdgst:%!s(bool=false) hostnqn:nqn.2024-02.io.spdk:host0 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-02.io.spdk:cnode0 traddr:10.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:22:14.669 request: 00:22:14.669 { 00:22:14.669 "method": "bdev_nvme_attach_controller", 00:22:14.669 "params": { 00:22:14.669 "name": "nvme0", 00:22:14.669 "trtype": "tcp", 00:22:14.669 "traddr": "10.0.0.1", 00:22:14.669 "adrfam": "ipv4", 00:22:14.669 "trsvcid": "4420", 00:22:14.669 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:22:14.669 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:22:14.669 "prchk_reftag": false, 00:22:14.669 "prchk_guard": false, 
00:22:14.669 "hdgst": false, 00:22:14.669 "ddgst": false, 00:22:14.669 "dhchap_key": "key2", 00:22:14.669 "allow_unrecognized_csi": false 00:22:14.669 } 00:22:14.669 } 00:22:14.669 Got JSON-RPC error response 00:22:14.669 GoRPCClient: error on JSON-RPC call 00:22:14.669 11:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:22:14.669 11:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:22:14.669 11:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:14.669 11:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:14.669 11:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:14.669 11:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:22:14.669 11:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:22:14.669 11:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.669 11:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:14.669 11:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.669 11:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:22:14.669 11:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:22:14.669 11:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:14.669 11:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:14.669 11:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:14.669 11:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:14.669 11:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:14.669 11:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:14.928 11:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:14.928 11:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:14.928 11:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:14.928 11:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:14.928 11:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:14.928 11:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:22:14.928 11:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:14.928 11:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:22:14.928 11:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:14.928 11:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t 
rpc_cmd 00:22:14.928 11:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:14.928 11:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:14.928 11:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.928 11:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:14.928 2024/11/20 11:35:00 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) dhchap_ctrlr_key:ckey2 dhchap_key:key1 hdgst:%!s(bool=false) hostnqn:nqn.2024-02.io.spdk:host0 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-02.io.spdk:cnode0 traddr:10.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:22:14.928 request: 00:22:14.929 { 00:22:14.929 "method": "bdev_nvme_attach_controller", 00:22:14.929 "params": { 00:22:14.929 "name": "nvme0", 00:22:14.929 "trtype": "tcp", 00:22:14.929 "traddr": "10.0.0.1", 00:22:14.929 "adrfam": "ipv4", 00:22:14.929 "trsvcid": "4420", 00:22:14.929 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:22:14.929 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:22:14.929 "prchk_reftag": false, 00:22:14.929 "prchk_guard": false, 00:22:14.929 "hdgst": false, 00:22:14.929 "ddgst": false, 00:22:14.929 "dhchap_key": "key1", 00:22:14.929 "dhchap_ctrlr_key": "ckey2", 00:22:14.929 "allow_unrecognized_csi": false 00:22:14.929 } 00:22:14.929 } 00:22:14.929 Got JSON-RPC error response 00:22:14.929 GoRPCClient: error on JSON-RPC call 00:22:14.929 11:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:22:14.929 11:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:22:14.929 11:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:14.929 11:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:14.929 11:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:14.929 11:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip 00:22:14.929 11:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:14.929 11:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:14.929 11:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:14.929 11:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:14.929 11:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:14.929 11:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:14.929 11:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:14.929 11:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:14.929 11:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:14.929 11:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 
10.0.0.1 00:22:14.929 11:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:14.929 11:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.929 11:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:14.929 nvme0n1 00:22:14.929 11:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.929 11:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:22:14.929 11:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:14.929 11:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:14.929 11:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:14.929 11:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:22:14.929 11:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDdmYjZlNGVkYTIzYmUzZTBmMGUwMTMwZDRmMzQ4YjE3mYm7: 00:22:14.929 11:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MmM5NDVjNGVjYTIzYTcyMTUwODNiZjIyNGM2NDE3YmGHQr9X: 00:22:14.929 11:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:14.929 11:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:22:14.929 11:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDdmYjZlNGVkYTIzYmUzZTBmMGUwMTMwZDRmMzQ4YjE3mYm7: 00:22:14.929 11:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MmM5NDVjNGVjYTIzYTcyMTUwODNiZjIyNGM2NDE3YmGHQr9X: ]] 00:22:14.929 11:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MmM5NDVjNGVjYTIzYTcyMTUwODNiZjIyNGM2NDE3YmGHQr9X: 00:22:14.929 11:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:14.929 11:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.929 11:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:14.929 11:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.929 11:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:22:14.929 11:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.929 11:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:22:14.929 11:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:14.929 11:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.929 11:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:14.929 11:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:14.929 11:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:22:14.929 11:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:14.929 11:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:22:14.929 11:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:14.929 11:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:22:14.929 11:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:14.929 11:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:14.929 11:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.929 11:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:14.929 2024/11/20 11:35:00 error on JSON-RPC call, method: bdev_nvme_set_keys, params: map[dhchap_ctrlr_key:ckey2 dhchap_key:key1 name:nvme0], err: error received for bdev_nvme_set_keys method, err: Code=-13 Msg=Permission denied 00:22:14.929 request: 00:22:14.929 { 00:22:14.929 "method": "bdev_nvme_set_keys", 00:22:14.929 "params": { 00:22:14.929 "name": "nvme0", 00:22:14.929 "dhchap_key": "key1", 00:22:14.929 "dhchap_ctrlr_key": "ckey2" 00:22:14.929 } 00:22:14.929 } 00:22:14.929 Got JSON-RPC error response 00:22:14.929 GoRPCClient: error on JSON-RPC call 00:22:14.929 11:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:22:14.929 11:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:22:14.929 11:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:14.929 11:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:14.929 11:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:14.929 11:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:22:14.929 11:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:22:14.929 11:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.929 11:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:15.187 11:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:15.187 11:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:22:15.187 11:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:22:16.122 11:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:22:16.122 11:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:22:16.122 11:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:16.122 11:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:16.122 11:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:16.122 11:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:22:16.122 11:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:22:16.122 11:35:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:16.122 11:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:16.122 11:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:16.122 11:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:22:16.122 11:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmM0MDdhNTg3M2E0YjJmNDRhZTA1ZmFhMzFmMzllOGUwZjI5N2FmMzU4YWFjMzI1e9mfRQ==: 00:22:16.122 11:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzlhMDMwODA2N2YyN2E0NmE4N2JhZDNlZjI3MjA4OTQ0ODg4NTk3ZGY4MWJlOTNmW0IdTg==: 00:22:16.122 11:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:16.122 11:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:22:16.122 11:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmM0MDdhNTg3M2E0YjJmNDRhZTA1ZmFhMzFmMzllOGUwZjI5N2FmMzU4YWFjMzI1e9mfRQ==: 00:22:16.122 11:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzlhMDMwODA2N2YyN2E0NmE4N2JhZDNlZjI3MjA4OTQ0ODg4NTk3ZGY4MWJlOTNmW0IdTg==: ]] 00:22:16.122 11:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzlhMDMwODA2N2YyN2E0NmE4N2JhZDNlZjI3MjA4OTQ0ODg4NTk3ZGY4MWJlOTNmW0IdTg==: 00:22:16.122 11:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # get_main_ns_ip 00:22:16.122 11:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:16.122 11:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:16.122 11:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:16.122 11:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:16.122 11:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:16.122 11:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:16.122 11:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:16.122 11:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:16.122 11:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:16.122 11:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:16.122 11:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:16.122 11:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:16.122 11:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:16.381 nvme0n1 00:22:16.381 11:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:16.381 11:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:22:16.381 11:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:16.381 11:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 
00:22:16.381 11:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:16.381 11:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:22:16.381 11:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDdmYjZlNGVkYTIzYmUzZTBmMGUwMTMwZDRmMzQ4YjE3mYm7: 00:22:16.381 11:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MmM5NDVjNGVjYTIzYTcyMTUwODNiZjIyNGM2NDE3YmGHQr9X: 00:22:16.381 11:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:16.381 11:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:22:16.381 11:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDdmYjZlNGVkYTIzYmUzZTBmMGUwMTMwZDRmMzQ4YjE3mYm7: 00:22:16.381 11:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MmM5NDVjNGVjYTIzYTcyMTUwODNiZjIyNGM2NDE3YmGHQr9X: ]] 00:22:16.381 11:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MmM5NDVjNGVjYTIzYTcyMTUwODNiZjIyNGM2NDE3YmGHQr9X: 00:22:16.381 11:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:22:16.381 11:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:22:16.381 11:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:22:16.381 11:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:22:16.381 11:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:16.381 11:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:22:16.381 11:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:16.381 11:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:22:16.381 11:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:16.381 11:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:16.381 2024/11/20 11:35:01 error on JSON-RPC call, method: bdev_nvme_set_keys, params: map[dhchap_ctrlr_key:ckey1 dhchap_key:key2 name:nvme0], err: error received for bdev_nvme_set_keys method, err: Code=-13 Msg=Permission denied 00:22:16.381 request: 00:22:16.381 { 00:22:16.381 "method": "bdev_nvme_set_keys", 00:22:16.381 "params": { 00:22:16.381 "name": "nvme0", 00:22:16.381 "dhchap_key": "key2", 00:22:16.381 "dhchap_ctrlr_key": "ckey1" 00:22:16.381 } 00:22:16.381 } 00:22:16.381 Got JSON-RPC error response 00:22:16.381 GoRPCClient: error on JSON-RPC call 00:22:16.381 11:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:22:16.381 11:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:22:16.381 11:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:16.381 11:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:16.381 11:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:16.381 11:35:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:22:16.381 11:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:22:16.381 11:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:16.381 11:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:16.381 11:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:16.381 11:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:22:16.381 11:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:22:17.316 11:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:22:17.316 11:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:22:17.316 11:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:17.316 11:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:17.316 11:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:17.316 11:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:22:17.316 11:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:22:17.316 11:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:22:17.316 11:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:22:17.316 11:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:17.316 11:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:22:17.316 11:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:17.316 11:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:22:17.316 11:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:17.316 11:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:17.316 rmmod nvme_tcp 00:22:17.316 rmmod nvme_fabrics 00:22:17.574 11:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:17.574 11:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:22:17.574 11:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:22:17.574 11:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@517 -- # '[' -n 92027 ']' 00:22:17.574 11:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # killprocess 92027 00:22:17.574 11:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # '[' -z 92027 ']' 00:22:17.574 11:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # kill -0 92027 00:22:17.574 11:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # uname 00:22:17.574 11:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:17.574 11:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 92027 00:22:17.574 killing process with pid 92027 00:22:17.574 11:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:17.574 11:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@964 -- # '[' reactor_0 
= sudo ']' 00:22:17.574 11:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 92027' 00:22:17.574 11:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@973 -- # kill 92027 00:22:17.574 11:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@978 -- # wait 92027 00:22:17.574 11:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:17.574 11:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:17.574 11:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:17.574 11:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 00:22:17.574 11:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-save 00:22:17.574 11:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-restore 00:22:17.574 11:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:17.574 11:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:17.574 11:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:22:17.574 11:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:22:17.574 11:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:22:17.574 11:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:22:17.574 11:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:22:17.832 11:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:22:17.832 11:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:22:17.832 11:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:22:17.832 11:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:22:17.832 11:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:22:17.832 11:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:22:17.833 11:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:22:17.833 11:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:17.833 11:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:17.833 11:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@246 -- # remove_spdk_ns 00:22:17.833 11:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:17.833 11:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:17.833 11:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:17.833 11:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@300 -- # return 0 00:22:17.833 11:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm 
/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:22:17.833 11:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:22:17.833 11:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:22:17.833 11:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:22:17.833 11:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # echo 0 00:22:17.833 11:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:22:17.833 11:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:22:17.833 11:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:22:17.833 11:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:22:17.833 11:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:22:17.833 11:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:22:17.833 11:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:22:18.399 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:22:18.658 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:22:18.658 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:22:18.658 11:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.H7h /tmp/spdk.key-null.j5k /tmp/spdk.key-sha256.W89 /tmp/spdk.key-sha384.hS4 /tmp/spdk.key-sha512.yKX /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log 00:22:18.658 11:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:22:18.917 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:22:19.176 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:22:19.176 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:22:19.176 00:22:19.176 real 0m39.565s 00:22:19.176 user 0m35.061s 00:22:19.176 sys 0m3.533s 00:22:19.176 11:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:19.176 11:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:19.176 ************************************ 00:22:19.176 END TEST nvmf_auth_host 00:22:19.176 ************************************ 00:22:19.176 11:35:04 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:22:19.176 11:35:04 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:22:19.176 11:35:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:19.176 11:35:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:19.176 11:35:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:19.176 ************************************ 00:22:19.176 START TEST nvmf_digest 00:22:19.176 
************************************ 00:22:19.176 11:35:04 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:22:19.176 * Looking for test storage... 00:22:19.176 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:22:19.176 11:35:04 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:19.176 11:35:04 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # lcov --version 00:22:19.176 11:35:04 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:19.176 11:35:04 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:19.176 11:35:04 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:19.176 11:35:04 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:19.176 11:35:04 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:19.176 11:35:04 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:22:19.176 11:35:04 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:22:19.176 11:35:04 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:22:19.176 11:35:04 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # read -ra ver2 00:22:19.176 11:35:04 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:22:19.176 11:35:04 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:22:19.176 11:35:04 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:22:19.176 11:35:04 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:19.176 11:35:04 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:22:19.176 11:35:04 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:22:19.176 11:35:04 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:19.176 11:35:04 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:19.176 11:35:04 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:22:19.176 11:35:04 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:22:19.176 11:35:04 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:19.176 11:35:04 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:22:19.176 11:35:04 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:22:19.176 11:35:04 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:22:19.176 11:35:04 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:22:19.176 11:35:04 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:19.176 11:35:04 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:22:19.176 11:35:04 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:22:19.176 11:35:04 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:19.176 11:35:04 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:19.176 11:35:04 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:22:19.176 11:35:04 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:19.176 11:35:04 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:19.176 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:19.176 --rc genhtml_branch_coverage=1 00:22:19.176 --rc genhtml_function_coverage=1 00:22:19.176 --rc genhtml_legend=1 00:22:19.176 --rc geninfo_all_blocks=1 00:22:19.176 --rc geninfo_unexecuted_blocks=1 00:22:19.176 00:22:19.176 ' 00:22:19.176 11:35:04 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:19.176 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:19.176 --rc genhtml_branch_coverage=1 00:22:19.176 --rc genhtml_function_coverage=1 00:22:19.176 --rc genhtml_legend=1 00:22:19.176 --rc geninfo_all_blocks=1 00:22:19.176 --rc geninfo_unexecuted_blocks=1 00:22:19.176 00:22:19.176 ' 00:22:19.176 11:35:04 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:19.176 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:19.176 --rc genhtml_branch_coverage=1 00:22:19.176 --rc genhtml_function_coverage=1 00:22:19.176 --rc genhtml_legend=1 00:22:19.176 --rc geninfo_all_blocks=1 00:22:19.176 --rc geninfo_unexecuted_blocks=1 00:22:19.176 00:22:19.176 ' 00:22:19.176 11:35:04 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:19.176 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:19.176 --rc genhtml_branch_coverage=1 00:22:19.176 --rc genhtml_function_coverage=1 00:22:19.176 --rc genhtml_legend=1 00:22:19.176 --rc geninfo_all_blocks=1 00:22:19.176 --rc geninfo_unexecuted_blocks=1 00:22:19.176 00:22:19.176 ' 00:22:19.176 11:35:04 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:19.176 11:35:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:22:19.176 11:35:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:19.176 11:35:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:19.176 11:35:04 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:19.176 11:35:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:19.176 11:35:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:19.176 11:35:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:19.176 11:35:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:19.176 11:35:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:19.176 11:35:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:19.436 11:35:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:19.436 11:35:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a 00:22:19.436 11:35:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=806d442c-08ba-4719-b76d-33169283459a 00:22:19.436 11:35:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:19.436 11:35:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:19.436 11:35:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:19.436 11:35:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:19.436 11:35:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:19.436 11:35:04 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:22:19.436 11:35:04 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:19.436 11:35:04 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:19.436 11:35:04 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:19.436 11:35:04 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:19.436 11:35:04 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:19.436 11:35:04 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:19.436 11:35:04 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:22:19.437 11:35:04 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:19.437 11:35:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:22:19.437 11:35:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:19.437 11:35:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:19.437 11:35:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:19.437 11:35:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:19.437 11:35:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:19.437 11:35:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:19.437 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:19.437 11:35:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:19.437 11:35:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:19.437 11:35:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:19.437 11:35:04 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:22:19.437 11:35:04 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:22:19.437 11:35:04 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:22:19.437 11:35:04 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:22:19.437 11:35:04 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:22:19.437 11:35:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:19.437 11:35:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:19.437 11:35:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:19.437 11:35:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:19.437 11:35:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:19.437 11:35:04 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:19.437 11:35:04 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:19.437 11:35:04 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:19.437 11:35:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:22:19.437 11:35:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:22:19.437 11:35:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:22:19.437 11:35:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:22:19.437 11:35:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:22:19.437 11:35:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@460 -- # nvmf_veth_init 00:22:19.437 11:35:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:19.437 11:35:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:22:19.437 11:35:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:22:19.437 11:35:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:22:19.437 11:35:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:19.437 11:35:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:22:19.437 11:35:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:22:19.437 11:35:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:22:19.437 11:35:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:22:19.437 11:35:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:22:19.437 11:35:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:22:19.437 11:35:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:19.437 11:35:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:22:19.437 11:35:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:22:19.437 11:35:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:22:19.437 11:35:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:22:19.437 11:35:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:22:19.437 Cannot find device "nvmf_init_br" 00:22:19.437 11:35:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@162 -- # true 00:22:19.437 11:35:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:22:19.437 Cannot find device "nvmf_init_br2" 00:22:19.437 11:35:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@163 -- # true 00:22:19.437 11:35:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:22:19.437 Cannot find device "nvmf_tgt_br" 00:22:19.437 11:35:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@164 -- # true 00:22:19.437 11:35:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@165 -- # ip link 
set nvmf_tgt_br2 nomaster 00:22:19.437 Cannot find device "nvmf_tgt_br2" 00:22:19.437 11:35:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@165 -- # true 00:22:19.437 11:35:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:22:19.437 Cannot find device "nvmf_init_br" 00:22:19.437 11:35:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@166 -- # true 00:22:19.437 11:35:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:22:19.437 Cannot find device "nvmf_init_br2" 00:22:19.437 11:35:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@167 -- # true 00:22:19.437 11:35:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:22:19.437 Cannot find device "nvmf_tgt_br" 00:22:19.437 11:35:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@168 -- # true 00:22:19.437 11:35:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:22:19.437 Cannot find device "nvmf_tgt_br2" 00:22:19.437 11:35:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@169 -- # true 00:22:19.437 11:35:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:22:19.437 Cannot find device "nvmf_br" 00:22:19.437 11:35:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@170 -- # true 00:22:19.437 11:35:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:22:19.437 Cannot find device "nvmf_init_if" 00:22:19.437 11:35:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@171 -- # true 00:22:19.437 11:35:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:22:19.437 Cannot find device "nvmf_init_if2" 00:22:19.437 11:35:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@172 -- # true 00:22:19.437 11:35:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:19.437 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:19.437 11:35:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@173 -- # true 00:22:19.437 11:35:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:19.437 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:19.437 11:35:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@174 -- # true 00:22:19.437 11:35:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:22:19.437 11:35:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:22:19.437 11:35:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:22:19.437 11:35:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:22:19.437 11:35:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:22:19.437 11:35:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:22:19.437 11:35:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:22:19.697 11:35:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:22:19.697 11:35:05 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:22:19.697 11:35:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:22:19.697 11:35:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:22:19.697 11:35:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:22:19.697 11:35:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:22:19.697 11:35:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:22:19.697 11:35:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:22:19.697 11:35:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:22:19.697 11:35:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:22:19.697 11:35:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:22:19.697 11:35:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:22:19.697 11:35:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:22:19.697 11:35:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:22:19.697 11:35:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:22:19.697 11:35:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:22:19.697 11:35:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:22:19.697 11:35:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:22:19.697 11:35:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:22:19.697 11:35:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:22:19.697 11:35:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:22:19.697 11:35:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:22:19.697 11:35:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:22:19.697 11:35:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:22:19.697 11:35:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:22:19.698 11:35:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:22:19.698 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:22:19.698 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.082 ms 00:22:19.698 00:22:19.698 --- 10.0.0.3 ping statistics --- 00:22:19.698 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:19.698 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:22:19.698 11:35:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:22:19.698 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:22:19.698 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.053 ms 00:22:19.698 00:22:19.698 --- 10.0.0.4 ping statistics --- 00:22:19.698 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:19.698 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:22:19.698 11:35:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:22:19.698 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:19.698 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:22:19.698 00:22:19.698 --- 10.0.0.1 ping statistics --- 00:22:19.698 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:19.698 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:22:19.698 11:35:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:22:19.698 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:19.698 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.085 ms 00:22:19.698 00:22:19.698 --- 10.0.0.2 ping statistics --- 00:22:19.698 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:19.698 rtt min/avg/max/mdev = 0.085/0.085/0.085/0.000 ms 00:22:19.698 11:35:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:19.698 11:35:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@461 -- # return 0 00:22:19.698 11:35:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:19.698 11:35:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:19.698 11:35:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:19.698 11:35:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:19.698 11:35:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:19.698 11:35:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:19.698 11:35:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:19.698 11:35:05 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:22:19.698 11:35:05 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:22:19.698 11:35:05 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:22:19.698 11:35:05 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:22:19.698 11:35:05 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:19.698 11:35:05 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:22:19.698 ************************************ 00:22:19.698 START TEST nvmf_digest_clean 00:22:19.698 ************************************ 00:22:19.698 11:35:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1129 -- # run_digest 00:22:19.698 11:35:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 
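For reference, the veth/namespace topology that nvmf_veth_init assembles in the trace above reduces to roughly the following sequence (a condensed sketch using only commands, interface names, and addresses that appear in the log; the full helper also brings up the second initiator/target pair on 10.0.0.2/10.0.0.4 and adds the bridge FORWARD rule):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                         # target side lives in the namespace
  ip addr add 10.0.0.1/24 dev nvmf_init_if                               # initiator address
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if # target address
  ip link set nvmf_init_if up && ip link set nvmf_init_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip link set nvmf_tgt_br up
  ip link add nvmf_br type bridge && ip link set nvmf_br up              # bridge joins the veth peers
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.3                                                     # initiator -> target reachability check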
00:22:19.698 11:35:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:22:19.698 11:35:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:22:19.698 11:35:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:22:19.698 11:35:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:22:19.698 11:35:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:19.698 11:35:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:19.698 11:35:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:22:19.698 11:35:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # nvmfpid=93704 00:22:19.698 11:35:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:22:19.698 11:35:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # waitforlisten 93704 00:22:19.698 11:35:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 93704 ']' 00:22:19.698 11:35:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:19.698 11:35:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:19.698 11:35:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:19.698 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:19.698 11:35:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:19.698 11:35:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:22:19.957 [2024-11-20 11:35:05.291096] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 00:22:19.957 [2024-11-20 11:35:05.291188] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:19.957 [2024-11-20 11:35:05.437772] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:19.957 [2024-11-20 11:35:05.470238] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:19.957 [2024-11-20 11:35:05.470307] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:19.957 [2024-11-20 11:35:05.470332] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:19.957 [2024-11-20 11:35:05.470346] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:19.957 [2024-11-20 11:35:05.470357] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
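In outline, the target bring-up behind nvmfappstart --wait-for-rpc above amounts to the sketch below (launch command, flags, and the /var/tmp/spdk.sock socket are taken from the log; the polling loop is a simplified stand-in for the waitforlisten helper, and the null0 bdev, TCP transport, and 10.0.0.3:4420 listener seen a few entries later are created afterwards by common_target_config):

  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc &
  nvmfpid=$!
  # Poll until the app answers on the default /var/tmp/spdk.sock, then finish init.
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init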
00:22:19.957 [2024-11-20 11:35:05.470736] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:19.957 11:35:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:19.957 11:35:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:22:19.957 11:35:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:19.957 11:35:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:19.957 11:35:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:22:20.215 11:35:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:20.215 11:35:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:22:20.215 11:35:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:22:20.215 11:35:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:22:20.215 11:35:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:20.215 11:35:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:22:20.215 null0 00:22:20.215 [2024-11-20 11:35:05.628694] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:20.215 [2024-11-20 11:35:05.652829] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:22:20.215 11:35:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:20.215 11:35:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:22:20.215 11:35:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:22:20.215 11:35:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:22:20.215 11:35:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:22:20.215 11:35:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:22:20.215 11:35:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:22:20.215 11:35:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:22:20.215 11:35:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=93736 00:22:20.215 11:35:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 93736 /var/tmp/bperf.sock 00:22:20.215 11:35:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:22:20.215 11:35:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 93736 ']' 00:22:20.215 11:35:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:22:20.215 11:35:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local 
max_retries=100 00:22:20.215 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:22:20.215 11:35:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:22:20.215 11:35:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:20.215 11:35:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:22:20.215 [2024-11-20 11:35:05.724946] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 00:22:20.215 [2024-11-20 11:35:05.725080] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93736 ] 00:22:20.473 [2024-11-20 11:35:05.898303] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:20.473 [2024-11-20 11:35:05.947959] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:20.474 11:35:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:20.474 11:35:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:22:20.474 11:35:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:22:20.474 11:35:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:22:20.474 11:35:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:22:21.040 11:35:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:21.040 11:35:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:21.299 nvme0n1 00:22:21.299 11:35:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:22:21.299 11:35:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:22:21.557 Running I/O for 2 seconds... 
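Spelled out, this first nvmf_digest_clean pass (4 KiB random reads, queue depth 128, DSA scanning off, data digest enabled on the controller) drives bdevperf over its own RPC socket with the binaries and flags shown in the trace; the closing accel_get_stats/jq step is the check run_bperf performs afterwards to read back which crc32c module actually executed:

  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &

  rpc='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock'
  $rpc framework_start_init
  $rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests

  # After the 2-second run, confirm crc32c work was done and by which accel module:
  $rpc accel_get_stats | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'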
00:22:23.427 14665.00 IOPS, 57.29 MiB/s [2024-11-20T11:35:09.275Z] 14489.00 IOPS, 56.60 MiB/s 00:22:23.686 Latency(us) 00:22:23.686 [2024-11-20T11:35:09.275Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:23.686 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:22:23.686 nvme0n1 : 2.00 14517.96 56.71 0.00 0.00 8805.85 4349.21 22520.55 00:22:23.686 [2024-11-20T11:35:09.275Z] =================================================================================================================== 00:22:23.686 [2024-11-20T11:35:09.275Z] Total : 14517.96 56.71 0.00 0.00 8805.85 4349.21 22520.55 00:22:23.686 { 00:22:23.686 "results": [ 00:22:23.686 { 00:22:23.686 "job": "nvme0n1", 00:22:23.686 "core_mask": "0x2", 00:22:23.686 "workload": "randread", 00:22:23.686 "status": "finished", 00:22:23.686 "queue_depth": 128, 00:22:23.686 "io_size": 4096, 00:22:23.686 "runtime": 2.004827, 00:22:23.686 "iops": 14517.960901364557, 00:22:23.686 "mibps": 56.7107847709553, 00:22:23.686 "io_failed": 0, 00:22:23.686 "io_timeout": 0, 00:22:23.686 "avg_latency_us": 8805.852188177383, 00:22:23.686 "min_latency_us": 4349.2072727272725, 00:22:23.686 "max_latency_us": 22520.552727272727 00:22:23.686 } 00:22:23.686 ], 00:22:23.686 "core_count": 1 00:22:23.686 } 00:22:23.686 11:35:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:22:23.686 11:35:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:22:23.686 11:35:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:22:23.686 11:35:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:22:23.686 | select(.opcode=="crc32c") 00:22:23.686 | "\(.module_name) \(.executed)"' 00:22:23.686 11:35:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:22:23.944 11:35:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:22:23.944 11:35:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:22:23.944 11:35:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:22:23.944 11:35:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:22:23.944 11:35:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 93736 00:22:23.944 11:35:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 93736 ']' 00:22:23.944 11:35:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 93736 00:22:23.944 11:35:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:22:23.944 11:35:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:23.944 11:35:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 93736 00:22:23.944 11:35:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:23.944 killing process with pid 93736 00:22:23.944 11:35:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:23.944 11:35:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 93736' 00:22:23.944 11:35:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 93736 00:22:23.944 Received shutdown signal, test time was about 2.000000 seconds 00:22:23.944 00:22:23.944 Latency(us) 00:22:23.944 [2024-11-20T11:35:09.533Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:23.944 [2024-11-20T11:35:09.533Z] =================================================================================================================== 00:22:23.944 [2024-11-20T11:35:09.533Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:23.944 11:35:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 93736 00:22:24.203 11:35:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:22:24.203 11:35:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:22:24.203 11:35:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:22:24.203 11:35:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:22:24.203 11:35:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:22:24.203 11:35:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:22:24.203 11:35:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:22:24.203 11:35:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=93816 00:22:24.203 11:35:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 93816 /var/tmp/bperf.sock 00:22:24.203 11:35:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:22:24.203 11:35:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 93816 ']' 00:22:24.203 11:35:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:22:24.203 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:22:24.203 11:35:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:24.203 11:35:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:22:24.203 11:35:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:24.203 11:35:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:22:24.203 I/O size of 131072 is greater than zero copy threshold (65536). 00:22:24.203 Zero copy mechanism will not be used. 00:22:24.203 [2024-11-20 11:35:09.717679] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 
00:22:24.203 [2024-11-20 11:35:09.717819] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93816 ] 00:22:24.462 [2024-11-20 11:35:09.869783] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:24.462 [2024-11-20 11:35:09.918464] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:24.462 11:35:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:24.462 11:35:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:22:24.462 11:35:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:22:24.462 11:35:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:22:24.463 11:35:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:22:25.028 11:35:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:25.028 11:35:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:25.285 nvme0n1 00:22:25.285 11:35:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:22:25.285 11:35:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:22:25.543 I/O size of 131072 is greater than zero copy threshold (65536). 00:22:25.543 Zero copy mechanism will not be used. 00:22:25.543 Running I/O for 2 seconds... 
00:22:27.412 7263.00 IOPS, 907.88 MiB/s [2024-11-20T11:35:13.001Z] 7298.50 IOPS, 912.31 MiB/s 00:22:27.412 Latency(us) 00:22:27.412 [2024-11-20T11:35:13.001Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:27.412 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:22:27.412 nvme0n1 : 2.00 7294.75 911.84 0.00 0.00 2189.43 666.53 6642.97 00:22:27.412 [2024-11-20T11:35:13.001Z] =================================================================================================================== 00:22:27.412 [2024-11-20T11:35:13.001Z] Total : 7294.75 911.84 0.00 0.00 2189.43 666.53 6642.97 00:22:27.412 { 00:22:27.412 "results": [ 00:22:27.412 { 00:22:27.412 "job": "nvme0n1", 00:22:27.412 "core_mask": "0x2", 00:22:27.412 "workload": "randread", 00:22:27.412 "status": "finished", 00:22:27.412 "queue_depth": 16, 00:22:27.412 "io_size": 131072, 00:22:27.412 "runtime": 2.003222, 00:22:27.412 "iops": 7294.748160713091, 00:22:27.412 "mibps": 911.8435200891364, 00:22:27.412 "io_failed": 0, 00:22:27.412 "io_timeout": 0, 00:22:27.412 "avg_latency_us": 2189.4269605519366, 00:22:27.412 "min_latency_us": 666.5309090909091, 00:22:27.412 "max_latency_us": 6642.967272727273 00:22:27.412 } 00:22:27.412 ], 00:22:27.412 "core_count": 1 00:22:27.412 } 00:22:27.412 11:35:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:22:27.412 11:35:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:22:27.412 11:35:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:22:27.412 11:35:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:22:27.412 11:35:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:22:27.412 | select(.opcode=="crc32c") 00:22:27.412 | "\(.module_name) \(.executed)"' 00:22:27.979 11:35:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:22:27.979 11:35:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:22:27.979 11:35:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:22:27.979 11:35:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:22:27.979 11:35:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 93816 00:22:27.979 11:35:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 93816 ']' 00:22:27.979 11:35:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 93816 00:22:27.979 11:35:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:22:27.979 11:35:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:27.979 11:35:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 93816 00:22:27.979 11:35:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:27.979 11:35:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 
00:22:27.979 killing process with pid 93816 00:22:27.979 11:35:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 93816' 00:22:27.980 Received shutdown signal, test time was about 2.000000 seconds 00:22:27.980 00:22:27.980 Latency(us) 00:22:27.980 [2024-11-20T11:35:13.569Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:27.980 [2024-11-20T11:35:13.569Z] =================================================================================================================== 00:22:27.980 [2024-11-20T11:35:13.569Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:27.980 11:35:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 93816 00:22:27.980 11:35:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 93816 00:22:27.980 11:35:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:22:27.980 11:35:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:22:27.980 11:35:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:22:27.980 11:35:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:22:27.980 11:35:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:22:27.980 11:35:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:22:27.980 11:35:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:22:27.980 11:35:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=93893 00:22:27.980 11:35:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 93893 /var/tmp/bperf.sock 00:22:27.980 11:35:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:22:27.980 11:35:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 93893 ']' 00:22:27.980 11:35:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:22:27.980 11:35:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:27.980 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:22:27.980 11:35:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:22:27.980 11:35:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:27.980 11:35:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:22:28.238 [2024-11-20 11:35:13.597709] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 
00:22:28.238 [2024-11-20 11:35:13.597850] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93893 ] 00:22:28.238 [2024-11-20 11:35:13.749945] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:28.238 [2024-11-20 11:35:13.783962] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:28.496 11:35:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:28.496 11:35:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:22:28.496 11:35:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:22:28.496 11:35:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:22:28.496 11:35:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:22:28.754 11:35:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:28.754 11:35:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:29.011 nvme0n1 00:22:29.270 11:35:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:22:29.270 11:35:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:22:29.270 Running I/O for 2 seconds... 
00:22:31.579 18889.00 IOPS, 73.79 MiB/s [2024-11-20T11:35:17.168Z] 18719.50 IOPS, 73.12 MiB/s 00:22:31.579 Latency(us) 00:22:31.579 [2024-11-20T11:35:17.168Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:31.579 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:22:31.579 nvme0n1 : 2.00 18755.58 73.26 0.00 0.00 6817.71 2576.76 16681.89 00:22:31.579 [2024-11-20T11:35:17.168Z] =================================================================================================================== 00:22:31.579 [2024-11-20T11:35:17.168Z] Total : 18755.58 73.26 0.00 0.00 6817.71 2576.76 16681.89 00:22:31.579 { 00:22:31.579 "results": [ 00:22:31.579 { 00:22:31.579 "job": "nvme0n1", 00:22:31.579 "core_mask": "0x2", 00:22:31.579 "workload": "randwrite", 00:22:31.579 "status": "finished", 00:22:31.579 "queue_depth": 128, 00:22:31.579 "io_size": 4096, 00:22:31.579 "runtime": 2.002977, 00:22:31.579 "iops": 18755.582315723048, 00:22:31.579 "mibps": 73.26399342079316, 00:22:31.579 "io_failed": 0, 00:22:31.579 "io_timeout": 0, 00:22:31.579 "avg_latency_us": 6817.7096660754005, 00:22:31.579 "min_latency_us": 2576.756363636364, 00:22:31.579 "max_latency_us": 16681.890909090907 00:22:31.579 } 00:22:31.579 ], 00:22:31.579 "core_count": 1 00:22:31.579 } 00:22:31.579 11:35:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:22:31.579 11:35:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:22:31.579 11:35:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:22:31.579 11:35:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:22:31.579 | select(.opcode=="crc32c") 00:22:31.579 | "\(.module_name) \(.executed)"' 00:22:31.579 11:35:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:22:31.579 11:35:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:22:31.579 11:35:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:22:31.579 11:35:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:22:31.579 11:35:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:22:31.579 11:35:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 93893 00:22:31.579 11:35:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 93893 ']' 00:22:31.579 11:35:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 93893 00:22:31.579 11:35:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:22:31.579 11:35:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:31.579 11:35:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 93893 00:22:31.579 11:35:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:31.579 killing process with pid 93893 00:22:31.579 11:35:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:31.579 11:35:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 93893' 00:22:31.579 Received shutdown signal, test time was about 2.000000 seconds 00:22:31.579 00:22:31.579 Latency(us) 00:22:31.579 [2024-11-20T11:35:17.168Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:31.579 [2024-11-20T11:35:17.168Z] =================================================================================================================== 00:22:31.579 [2024-11-20T11:35:17.168Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:31.580 11:35:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 93893 00:22:31.580 11:35:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 93893 00:22:31.838 11:35:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:22:31.838 11:35:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:22:31.838 11:35:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:22:31.838 11:35:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:22:31.838 11:35:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:22:31.838 11:35:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:22:31.838 11:35:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:22:31.838 11:35:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=93965 00:22:31.838 11:35:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 93965 /var/tmp/bperf.sock 00:22:31.838 11:35:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:22:31.838 11:35:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 93965 ']' 00:22:31.838 11:35:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:22:31.838 11:35:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:31.838 11:35:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:22:31.838 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:22:31.838 11:35:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:31.838 11:35:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:22:31.838 I/O size of 131072 is greater than zero copy threshold (65536). 00:22:31.838 Zero copy mechanism will not be used. 00:22:31.838 [2024-11-20 11:35:17.330296] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 
00:22:31.838 [2024-11-20 11:35:17.330382] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93965 ] 00:22:32.097 [2024-11-20 11:35:17.476629] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:32.097 [2024-11-20 11:35:17.510948] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:32.097 11:35:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:32.097 11:35:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:22:32.097 11:35:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:22:32.097 11:35:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:22:32.097 11:35:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:22:32.355 11:35:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:32.355 11:35:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:32.922 nvme0n1 00:22:32.922 11:35:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:22:32.922 11:35:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:22:32.922 I/O size of 131072 is greater than zero copy threshold (65536). 00:22:32.922 Zero copy mechanism will not be used. 00:22:32.922 Running I/O for 2 seconds... 
00:22:34.793 6185.00 IOPS, 773.12 MiB/s [2024-11-20T11:35:20.382Z] 6408.00 IOPS, 801.00 MiB/s 00:22:34.793 Latency(us) 00:22:34.793 [2024-11-20T11:35:20.382Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:34.793 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:22:34.793 nvme0n1 : 2.00 6405.57 800.70 0.00 0.00 2491.75 1876.71 8877.15 00:22:34.793 [2024-11-20T11:35:20.382Z] =================================================================================================================== 00:22:34.793 [2024-11-20T11:35:20.382Z] Total : 6405.57 800.70 0.00 0.00 2491.75 1876.71 8877.15 00:22:34.793 { 00:22:34.793 "results": [ 00:22:34.793 { 00:22:34.793 "job": "nvme0n1", 00:22:34.793 "core_mask": "0x2", 00:22:34.793 "workload": "randwrite", 00:22:34.793 "status": "finished", 00:22:34.793 "queue_depth": 16, 00:22:34.793 "io_size": 131072, 00:22:34.793 "runtime": 2.003257, 00:22:34.793 "iops": 6405.568531646214, 00:22:34.793 "mibps": 800.6960664557768, 00:22:34.793 "io_failed": 0, 00:22:34.793 "io_timeout": 0, 00:22:34.793 "avg_latency_us": 2491.746198141011, 00:22:34.793 "min_latency_us": 1876.7127272727273, 00:22:34.793 "max_latency_us": 8877.149090909092 00:22:34.793 } 00:22:34.793 ], 00:22:34.793 "core_count": 1 00:22:34.793 } 00:22:34.793 11:35:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:22:34.793 11:35:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:22:34.793 11:35:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:22:34.793 11:35:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:22:34.793 | select(.opcode=="crc32c") 00:22:34.793 | "\(.module_name) \(.executed)"' 00:22:34.793 11:35:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:22:35.361 11:35:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:22:35.361 11:35:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:22:35.361 11:35:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:22:35.361 11:35:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:22:35.361 11:35:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 93965 00:22:35.361 11:35:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 93965 ']' 00:22:35.361 11:35:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 93965 00:22:35.361 11:35:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:22:35.361 11:35:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:35.361 11:35:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 93965 00:22:35.361 killing process with pid 93965 00:22:35.361 Received shutdown signal, test time was about 2.000000 seconds 00:22:35.361 00:22:35.361 Latency(us) 00:22:35.361 [2024-11-20T11:35:20.950Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:22:35.361 [2024-11-20T11:35:20.950Z] =================================================================================================================== 00:22:35.361 [2024-11-20T11:35:20.950Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:35.361 11:35:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:35.361 11:35:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:35.361 11:35:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 93965' 00:22:35.361 11:35:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 93965 00:22:35.361 11:35:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 93965 00:22:35.361 11:35:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 93704 00:22:35.361 11:35:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 93704 ']' 00:22:35.361 11:35:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 93704 00:22:35.361 11:35:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:22:35.361 11:35:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:35.361 11:35:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 93704 00:22:35.361 killing process with pid 93704 00:22:35.361 11:35:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:35.361 11:35:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:35.361 11:35:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 93704' 00:22:35.361 11:35:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 93704 00:22:35.361 11:35:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 93704 00:22:35.620 ************************************ 00:22:35.620 END TEST nvmf_digest_clean 00:22:35.620 ************************************ 00:22:35.620 00:22:35.620 real 0m15.798s 00:22:35.620 user 0m31.412s 00:22:35.620 sys 0m4.169s 00:22:35.620 11:35:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:35.620 11:35:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:22:35.620 11:35:21 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:22:35.620 11:35:21 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:22:35.620 11:35:21 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:35.620 11:35:21 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:22:35.620 ************************************ 00:22:35.620 START TEST nvmf_digest_error 00:22:35.620 ************************************ 00:22:35.621 11:35:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1129 -- # run_digest_error 00:22:35.621 11:35:21 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:22:35.621 11:35:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:35.621 11:35:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:35.621 11:35:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:22:35.621 11:35:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # nvmfpid=94065 00:22:35.621 11:35:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # waitforlisten 94065 00:22:35.621 11:35:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:22:35.621 11:35:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 94065 ']' 00:22:35.621 11:35:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:35.621 11:35:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:35.621 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:35.621 11:35:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:35.621 11:35:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:35.621 11:35:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:22:35.621 [2024-11-20 11:35:21.134014] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 00:22:35.621 [2024-11-20 11:35:21.134108] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:35.880 [2024-11-20 11:35:21.282368] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:35.880 [2024-11-20 11:35:21.314637] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:35.880 [2024-11-20 11:35:21.314702] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:35.880 [2024-11-20 11:35:21.314725] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:35.880 [2024-11-20 11:35:21.314734] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:35.880 [2024-11-20 11:35:21.314741] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:35.880 [2024-11-20 11:35:21.315053] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:35.880 11:35:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:35.880 11:35:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:22:35.880 11:35:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:35.880 11:35:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:35.880 11:35:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:22:35.880 11:35:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:35.880 11:35:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:22:35.880 11:35:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:35.880 11:35:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:22:35.880 [2024-11-20 11:35:21.407476] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:22:35.880 11:35:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:35.880 11:35:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:22:35.880 11:35:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:22:35.880 11:35:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:35.880 11:35:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:22:36.139 null0 00:22:36.139 [2024-11-20 11:35:21.480285] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:36.139 [2024-11-20 11:35:21.504444] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:22:36.139 11:35:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:36.139 11:35:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:22:36.139 11:35:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:22:36.139 11:35:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:22:36.139 11:35:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:22:36.139 11:35:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:22:36.139 11:35:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=94097 00:22:36.139 11:35:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:22:36.139 11:35:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 94097 /var/tmp/bperf.sock 00:22:36.139 11:35:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 94097 ']' 00:22:36.139 11:35:21 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:22:36.139 11:35:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:36.139 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:22:36.139 11:35:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:22:36.139 11:35:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:36.139 11:35:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:22:36.139 [2024-11-20 11:35:21.575441] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 00:22:36.139 [2024-11-20 11:35:21.575557] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94097 ] 00:22:36.397 [2024-11-20 11:35:21.733592] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:36.397 [2024-11-20 11:35:21.774673] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:37.331 11:35:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:37.331 11:35:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:22:37.331 11:35:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:22:37.331 11:35:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:22:37.675 11:35:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:22:37.675 11:35:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:37.675 11:35:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:22:37.675 11:35:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:37.675 11:35:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:37.675 11:35:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:37.934 nvme0n1 00:22:37.934 11:35:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:22:37.934 11:35:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:37.934 11:35:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:22:37.934 11:35:23 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:37.934 11:35:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:22:37.934 11:35:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:22:38.193 Running I/O for 2 seconds... 00:22:38.193 [2024-11-20 11:35:23.542159] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb88540) 00:22:38.193 [2024-11-20 11:35:23.542225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:5271 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.193 [2024-11-20 11:35:23.542241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:38.193 [2024-11-20 11:35:23.555005] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb88540) 00:22:38.193 [2024-11-20 11:35:23.555063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:23091 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.193 [2024-11-20 11:35:23.555079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:38.193 [2024-11-20 11:35:23.570352] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb88540) 00:22:38.193 [2024-11-20 11:35:23.570432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:15990 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.193 [2024-11-20 11:35:23.570457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:38.193 [2024-11-20 11:35:23.586219] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb88540) 00:22:38.193 [2024-11-20 11:35:23.586292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:2400 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.193 [2024-11-20 11:35:23.586309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:38.193 [2024-11-20 11:35:23.600453] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb88540) 00:22:38.193 [2024-11-20 11:35:23.600516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:13387 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.193 [2024-11-20 11:35:23.600533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:38.193 [2024-11-20 11:35:23.614672] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb88540) 00:22:38.193 [2024-11-20 11:35:23.614739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:12092 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.193 [2024-11-20 11:35:23.614754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:38.193 
[2024-11-20 11:35:23.627950] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb88540) 00:22:38.193 [2024-11-20 11:35:23.628014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23147 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.194 [2024-11-20 11:35:23.628030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:38.194 [2024-11-20 11:35:23.644812] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb88540) 00:22:38.194 [2024-11-20 11:35:23.644876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:5345 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.194 [2024-11-20 11:35:23.644892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:38.194 [2024-11-20 11:35:23.660366] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb88540) 00:22:38.194 [2024-11-20 11:35:23.660428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:18460 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.194 [2024-11-20 11:35:23.660444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:38.194 [2024-11-20 11:35:23.674640] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb88540) 00:22:38.194 [2024-11-20 11:35:23.674696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:907 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.194 [2024-11-20 11:35:23.674711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:38.194 [2024-11-20 11:35:23.689309] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb88540) 00:22:38.194 [2024-11-20 11:35:23.689377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:19907 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.194 [2024-11-20 11:35:23.689393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:38.194 [2024-11-20 11:35:23.704681] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb88540) 00:22:38.194 [2024-11-20 11:35:23.704742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:11346 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.194 [2024-11-20 11:35:23.704758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:38.194 [2024-11-20 11:35:23.716485] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb88540) 00:22:38.194 [2024-11-20 11:35:23.716550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:3604 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.194 [2024-11-20 11:35:23.716566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 
cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:38.194 [2024-11-20 11:35:23.730792] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb88540) 00:22:38.194 [2024-11-20 11:35:23.730855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:22228 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.194 [2024-11-20 11:35:23.730872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:38.194 [2024-11-20 11:35:23.746634] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb88540) 00:22:38.194 [2024-11-20 11:35:23.746698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:7745 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.194 [2024-11-20 11:35:23.746713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:38.194 [2024-11-20 11:35:23.759007] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb88540) 00:22:38.194 [2024-11-20 11:35:23.759067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:13028 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.194 [2024-11-20 11:35:23.759083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:38.194 [2024-11-20 11:35:23.773812] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb88540) 00:22:38.194 [2024-11-20 11:35:23.773882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:18731 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.194 [2024-11-20 11:35:23.773897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:38.453 [2024-11-20 11:35:23.791122] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb88540) 00:22:38.453 [2024-11-20 11:35:23.791194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:24893 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.453 [2024-11-20 11:35:23.791211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:38.453 [2024-11-20 11:35:23.805494] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb88540) 00:22:38.453 [2024-11-20 11:35:23.805559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:9621 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.453 [2024-11-20 11:35:23.805575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:38.453 [2024-11-20 11:35:23.817870] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb88540) 00:22:38.453 [2024-11-20 11:35:23.817932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:23346 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.453 [2024-11-20 11:35:23.817948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:38.453 [2024-11-20 11:35:23.833066] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb88540) 00:22:38.453 [2024-11-20 11:35:23.833131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:1869 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.453 [2024-11-20 11:35:23.833148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:38.453 [2024-11-20 11:35:23.848847] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb88540) 00:22:38.453 [2024-11-20 11:35:23.848911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:16732 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.453 [2024-11-20 11:35:23.848927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:38.453 [2024-11-20 11:35:23.862680] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb88540) 00:22:38.453 [2024-11-20 11:35:23.862743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:25441 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.453 [2024-11-20 11:35:23.862758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:38.453 [2024-11-20 11:35:23.877972] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb88540) 00:22:38.453 [2024-11-20 11:35:23.878038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:11595 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.453 [2024-11-20 11:35:23.878054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:38.453 [2024-11-20 11:35:23.892524] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb88540) 00:22:38.453 [2024-11-20 11:35:23.892584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:2898 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.453 [2024-11-20 11:35:23.892600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:38.453 [2024-11-20 11:35:23.907809] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb88540) 00:22:38.453 [2024-11-20 11:35:23.907878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:16949 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.453 [2024-11-20 11:35:23.907893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:38.453 [2024-11-20 11:35:23.921330] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb88540) 00:22:38.453 [2024-11-20 11:35:23.921394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:19257 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.453 [2024-11-20 11:35:23.921410] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:38.453 [2024-11-20 11:35:23.935311] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb88540) 00:22:38.453 [2024-11-20 11:35:23.935382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9387 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.453 [2024-11-20 11:35:23.935397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:38.453 [2024-11-20 11:35:23.950712] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb88540) 00:22:38.453 [2024-11-20 11:35:23.950789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:23298 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.453 [2024-11-20 11:35:23.950807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:38.453 [2024-11-20 11:35:23.966000] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb88540) 00:22:38.453 [2024-11-20 11:35:23.966067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:13632 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.453 [2024-11-20 11:35:23.966083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:38.453 [2024-11-20 11:35:23.981079] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb88540) 00:22:38.453 [2024-11-20 11:35:23.981148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:20123 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.453 [2024-11-20 11:35:23.981163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:38.453 [2024-11-20 11:35:23.994752] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb88540) 00:22:38.453 [2024-11-20 11:35:23.994841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25152 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.453 [2024-11-20 11:35:23.994863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:38.453 [2024-11-20 11:35:24.009372] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb88540) 00:22:38.453 [2024-11-20 11:35:24.009436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:8473 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.453 [2024-11-20 11:35:24.009452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:38.453 [2024-11-20 11:35:24.023120] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb88540) 00:22:38.453 [2024-11-20 11:35:24.023178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:4191 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.453 
[2024-11-20 11:35:24.023193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:38.453 [2024-11-20 11:35:24.037573] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb88540) 00:22:38.453 [2024-11-20 11:35:24.037654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:9977 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.453 [2024-11-20 11:35:24.037670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:38.712 [2024-11-20 11:35:24.052197] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb88540) 00:22:38.713 [2024-11-20 11:35:24.052276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22120 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.713 [2024-11-20 11:35:24.052292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:38.713 [2024-11-20 11:35:24.066672] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb88540) 00:22:38.713 [2024-11-20 11:35:24.066735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:1856 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.713 [2024-11-20 11:35:24.066751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:38.713 [2024-11-20 11:35:24.079639] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb88540) 00:22:38.713 [2024-11-20 11:35:24.079724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:15340 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.713 [2024-11-20 11:35:24.079741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:38.713 [2024-11-20 11:35:24.096031] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb88540) 00:22:38.713 [2024-11-20 11:35:24.096106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:12214 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.713 [2024-11-20 11:35:24.096122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:38.713 [2024-11-20 11:35:24.111483] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb88540) 00:22:38.713 [2024-11-20 11:35:24.111553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:16113 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.713 [2024-11-20 11:35:24.111570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:38.713 [2024-11-20 11:35:24.126513] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb88540) 00:22:38.713 [2024-11-20 11:35:24.126589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:10256 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.713 [2024-11-20 11:35:24.126604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:38.713 [2024-11-20 11:35:24.141585] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb88540) 00:22:38.713 [2024-11-20 11:35:24.141687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:10408 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.713 [2024-11-20 11:35:24.141706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:38.713 [2024-11-20 11:35:24.155485] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb88540) 00:22:38.713 [2024-11-20 11:35:24.155556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21539 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.713 [2024-11-20 11:35:24.155572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:38.713 [2024-11-20 11:35:24.170156] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb88540) 00:22:38.713 [2024-11-20 11:35:24.170228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:15639 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.713 [2024-11-20 11:35:24.170244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:38.713 [2024-11-20 11:35:24.184605] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb88540) 00:22:38.713 [2024-11-20 11:35:24.184686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17682 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.713 [2024-11-20 11:35:24.184701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:38.713 [2024-11-20 11:35:24.199001] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb88540) 00:22:38.713 [2024-11-20 11:35:24.199068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:18891 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.713 [2024-11-20 11:35:24.199084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:38.713 [2024-11-20 11:35:24.215930] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb88540) 00:22:38.713 [2024-11-20 11:35:24.216013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:385 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.713 [2024-11-20 11:35:24.216041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:38.713 [2024-11-20 11:35:24.228487] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb88540) 00:22:38.713 [2024-11-20 11:35:24.228558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:113 nsid:1 lba:18313 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.713 [2024-11-20 11:35:24.228573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:38.713 [2024-11-20 11:35:24.242852] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb88540) 00:22:38.713 [2024-11-20 11:35:24.242920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:3265 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.713 [2024-11-20 11:35:24.242937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:38.713 [2024-11-20 11:35:24.258417] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb88540) 00:22:38.713 [2024-11-20 11:35:24.258483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:4858 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.713 [2024-11-20 11:35:24.258499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:38.713 [2024-11-20 11:35:24.273034] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb88540) 00:22:38.713 [2024-11-20 11:35:24.273099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:20739 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.713 [2024-11-20 11:35:24.273117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:38.713 [2024-11-20 11:35:24.289362] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb88540) 00:22:38.713 [2024-11-20 11:35:24.289447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:16483 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.713 [2024-11-20 11:35:24.289464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:38.973 [2024-11-20 11:35:24.301513] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb88540) 00:22:38.973 [2024-11-20 11:35:24.301587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:18566 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.973 [2024-11-20 11:35:24.301604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:38.973 [2024-11-20 11:35:24.317098] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb88540) 00:22:38.973 [2024-11-20 11:35:24.317198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:16971 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.973 [2024-11-20 11:35:24.317222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:38.973 [2024-11-20 11:35:24.330793] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb88540) 00:22:38.973 [2024-11-20 11:35:24.330863] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:18889 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.973 [2024-11-20 11:35:24.330879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:38.973 [2024-11-20 11:35:24.346932] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb88540) 00:22:38.973 [2024-11-20 11:35:24.347029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2867 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.973 [2024-11-20 11:35:24.347056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:38.973 [2024-11-20 11:35:24.361816] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb88540) 00:22:38.973 [2024-11-20 11:35:24.361881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:21177 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.973 [2024-11-20 11:35:24.361896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:38.973 [2024-11-20 11:35:24.376220] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb88540) 00:22:38.973 [2024-11-20 11:35:24.376303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:16963 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.973 [2024-11-20 11:35:24.376320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:38.973 [2024-11-20 11:35:24.390644] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb88540) 00:22:38.973 [2024-11-20 11:35:24.390708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:14052 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.973 [2024-11-20 11:35:24.390724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:38.973 [2024-11-20 11:35:24.407564] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb88540) 00:22:38.973 [2024-11-20 11:35:24.407641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23287 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.973 [2024-11-20 11:35:24.407659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:38.973 [2024-11-20 11:35:24.422637] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb88540) 00:22:38.973 [2024-11-20 11:35:24.422704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1189 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.973 [2024-11-20 11:35:24.422720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:38.973 [2024-11-20 11:35:24.437433] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb88540) 00:22:38.973 
[2024-11-20 11:35:24.437503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:22486 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.973 [2024-11-20 11:35:24.437520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:38.973 [2024-11-20 11:35:24.451849] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb88540) 00:22:38.973 [2024-11-20 11:35:24.451944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11607 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.973 [2024-11-20 11:35:24.451967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:38.973 [2024-11-20 11:35:24.464980] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb88540) 00:22:38.973 [2024-11-20 11:35:24.465048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17436 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.973 [2024-11-20 11:35:24.465064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:38.973 [2024-11-20 11:35:24.480856] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb88540) 00:22:38.973 [2024-11-20 11:35:24.480918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:23684 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.974 [2024-11-20 11:35:24.480934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:38.974 [2024-11-20 11:35:24.495880] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb88540) 00:22:38.974 [2024-11-20 11:35:24.495944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:1132 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.974 [2024-11-20 11:35:24.495961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:38.974 [2024-11-20 11:35:24.510352] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb88540) 00:22:38.974 [2024-11-20 11:35:24.510418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:7698 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.974 [2024-11-20 11:35:24.510433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:38.974 17311.00 IOPS, 67.62 MiB/s [2024-11-20T11:35:24.563Z] [2024-11-20 11:35:24.526953] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb88540) 00:22:38.974 [2024-11-20 11:35:24.527006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:24192 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.974 [2024-11-20 11:35:24.527022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:38.974 [2024-11-20 11:35:24.540807] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb88540) 00:22:38.974 [2024-11-20 11:35:24.540871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:16253 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.974 [2024-11-20 11:35:24.540888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:38.974 [2024-11-20 11:35:24.557501] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb88540) 00:22:38.974 [2024-11-20 11:35:24.557566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20442 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.974 [2024-11-20 11:35:24.557582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:39.233 [2024-11-20 11:35:24.571777] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb88540) 00:22:39.233 [2024-11-20 11:35:24.571835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:14328 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.233 [2024-11-20 11:35:24.571851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:39.233 [2024-11-20 11:35:24.585970] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb88540) 00:22:39.233 [2024-11-20 11:35:24.586032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:1905 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.233 [2024-11-20 11:35:24.586047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:39.233 [2024-11-20 11:35:24.600190] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb88540) 00:22:39.233 [2024-11-20 11:35:24.600252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12203 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.233 [2024-11-20 11:35:24.600269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:39.233 [2024-11-20 11:35:24.615324] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb88540) 00:22:39.233 [2024-11-20 11:35:24.615418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:14948 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.233 [2024-11-20 11:35:24.615445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:39.233 [2024-11-20 11:35:24.628935] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb88540) 00:22:39.233 [2024-11-20 11:35:24.629006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:10714 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.233 [2024-11-20 11:35:24.629031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:22:39.233 [2024-11-20 11:35:24.645148] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb88540) 00:22:39.233 [2024-11-20 11:35:24.645225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:24490 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.233 [2024-11-20 11:35:24.645249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:39.233 [2024-11-20 11:35:24.659140] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb88540) 00:22:39.233 [2024-11-20 11:35:24.659217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:15897 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.233 [2024-11-20 11:35:24.659244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:39.233 [2024-11-20 11:35:24.673525] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb88540) 00:22:39.233 [2024-11-20 11:35:24.673608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22209 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.233 [2024-11-20 11:35:24.673649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:39.233 [2024-11-20 11:35:24.688295] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb88540) 00:22:39.233 [2024-11-20 11:35:24.688361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:19578 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.233 [2024-11-20 11:35:24.688386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:39.233 [2024-11-20 11:35:24.702188] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb88540) 00:22:39.233 [2024-11-20 11:35:24.702259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:2396 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.233 [2024-11-20 11:35:24.702282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:39.233 [2024-11-20 11:35:24.717380] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb88540) 00:22:39.233 [2024-11-20 11:35:24.717460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:18623 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.233 [2024-11-20 11:35:24.717484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:39.233 [2024-11-20 11:35:24.732698] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb88540) 00:22:39.233 [2024-11-20 11:35:24.732776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:4118 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.233 [2024-11-20 11:35:24.732802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:39.233 [2024-11-20 11:35:24.746973] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb88540) 00:22:39.233 [2024-11-20 11:35:24.747043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:22664 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.233 [2024-11-20 11:35:24.747065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:39.233 [2024-11-20 11:35:24.763199] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb88540) 00:22:39.233 [2024-11-20 11:35:24.763269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:8738 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.233 [2024-11-20 11:35:24.763295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:39.233 [2024-11-20 11:35:24.778001] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb88540) 00:22:39.233 [2024-11-20 11:35:24.778068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7302 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.233 [2024-11-20 11:35:24.778091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:39.233 [2024-11-20 11:35:24.792052] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb88540) 00:22:39.233 [2024-11-20 11:35:24.792131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:6137 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.233 [2024-11-20 11:35:24.792158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:39.233 [2024-11-20 11:35:24.805850] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb88540) 00:22:39.233 [2024-11-20 11:35:24.805926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1190 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.233 [2024-11-20 11:35:24.805951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:39.492 [2024-11-20 11:35:24.820412] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb88540) 00:22:39.492 [2024-11-20 11:35:24.820490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:3338 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.492 [2024-11-20 11:35:24.820514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:39.492 [2024-11-20 11:35:24.837788] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb88540) 00:22:39.492 [2024-11-20 11:35:24.837877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:3912 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.492 [2024-11-20 11:35:24.837904] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:39.492 [2024-11-20 11:35:24.853248] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb88540) 00:22:39.492 [2024-11-20 11:35:24.853362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:2158 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.492 [2024-11-20 11:35:24.853386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:39.492 [2024-11-20 11:35:24.874452] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb88540) 00:22:39.492 [2024-11-20 11:35:24.874526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:9063 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.492 [2024-11-20 11:35:24.874549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:39.492 [2024-11-20 11:35:24.889919] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb88540) 00:22:39.492 [2024-11-20 11:35:24.889985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:12434 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.492 [2024-11-20 11:35:24.890007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:39.492 [2024-11-20 11:35:24.908590] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb88540) 00:22:39.492 [2024-11-20 11:35:24.908676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:25039 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.492 [2024-11-20 11:35:24.908700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:39.492 [2024-11-20 11:35:24.925176] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb88540) 00:22:39.492 [2024-11-20 11:35:24.925253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10731 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.492 [2024-11-20 11:35:24.925276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:39.492 [2024-11-20 11:35:24.943336] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb88540) 00:22:39.492 [2024-11-20 11:35:24.943409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:15690 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.492 [2024-11-20 11:35:24.943433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:39.492 [2024-11-20 11:35:24.959013] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb88540) 00:22:39.492 [2024-11-20 11:35:24.959090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:15527 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.492 [2024-11-20 11:35:24.959116] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:39.492 [2024-11-20 11:35:24.976564] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb88540) 00:22:39.492 [2024-11-20 11:35:24.976659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:4564 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.492 [2024-11-20 11:35:24.976695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:39.492 [2024-11-20 11:35:24.993996] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb88540) 00:22:39.492 [2024-11-20 11:35:24.994076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:19800 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.492 [2024-11-20 11:35:24.994100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:39.492 [2024-11-20 11:35:25.010172] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb88540) 00:22:39.492 [2024-11-20 11:35:25.010249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:2756 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.492 [2024-11-20 11:35:25.010275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:39.492 [2024-11-20 11:35:25.028339] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb88540) 00:22:39.492 [2024-11-20 11:35:25.028427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:17575 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.492 [2024-11-20 11:35:25.028452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:39.492 [2024-11-20 11:35:25.046524] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb88540) 00:22:39.492 [2024-11-20 11:35:25.046606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:15360 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.492 [2024-11-20 11:35:25.046650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:39.492 [2024-11-20 11:35:25.061666] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb88540) 00:22:39.492 [2024-11-20 11:35:25.061761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:3591 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.492 [2024-11-20 11:35:25.061787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:39.751 [2024-11-20 11:35:25.079219] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb88540) 00:22:39.751 [2024-11-20 11:35:25.079298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:23090 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:22:39.751 [2024-11-20 11:35:25.079322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:39.751 [2024-11-20 11:35:25.095284] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb88540) 00:22:39.751 [2024-11-20 11:35:25.095367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:18783 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.751 [2024-11-20 11:35:25.095394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:39.751 [2024-11-20 11:35:25.113022] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb88540) 00:22:39.751 [2024-11-20 11:35:25.113101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:21998 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.751 [2024-11-20 11:35:25.113124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:39.751 [2024-11-20 11:35:25.128211] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb88540) 00:22:39.751 [2024-11-20 11:35:25.128298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:12223 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.751 [2024-11-20 11:35:25.128327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:39.751 [2024-11-20 11:35:25.144160] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb88540) 00:22:39.751 [2024-11-20 11:35:25.144227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:3561 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.751 [2024-11-20 11:35:25.144243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:39.751 [2024-11-20 11:35:25.156757] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb88540) 00:22:39.751 [2024-11-20 11:35:25.156825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:21261 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.751 [2024-11-20 11:35:25.156840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:39.751 [2024-11-20 11:35:25.171493] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb88540) 00:22:39.752 [2024-11-20 11:35:25.171556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:4134 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.752 [2024-11-20 11:35:25.171572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:39.752 [2024-11-20 11:35:25.186368] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb88540) 00:22:39.752 [2024-11-20 11:35:25.186436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 
lba:7351 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.752 [2024-11-20 11:35:25.186452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:39.752 [2024-11-20 11:35:25.201475] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb88540) 00:22:39.752 [2024-11-20 11:35:25.201531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23081 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.752 [2024-11-20 11:35:25.201547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:39.752 [2024-11-20 11:35:25.216669] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb88540) 00:22:39.752 [2024-11-20 11:35:25.216741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:16222 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.752 [2024-11-20 11:35:25.216757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:39.752 [2024-11-20 11:35:25.231208] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb88540) 00:22:39.752 [2024-11-20 11:35:25.231281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:14371 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.752 [2024-11-20 11:35:25.231297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:39.752 [2024-11-20 11:35:25.245519] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb88540) 00:22:39.752 [2024-11-20 11:35:25.245579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:9105 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.752 [2024-11-20 11:35:25.245595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:39.752 [2024-11-20 11:35:25.259775] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb88540) 00:22:39.752 [2024-11-20 11:35:25.259831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:14116 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.752 [2024-11-20 11:35:25.259846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:39.752 [2024-11-20 11:35:25.270559] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb88540) 00:22:39.752 [2024-11-20 11:35:25.270636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:22384 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.752 [2024-11-20 11:35:25.270660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:39.752 [2024-11-20 11:35:25.286978] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb88540) 00:22:39.752 [2024-11-20 11:35:25.287044] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:19021 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.752 [2024-11-20 11:35:25.287060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:39.752 [2024-11-20 11:35:25.301305] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb88540) 00:22:39.752 [2024-11-20 11:35:25.301363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:2405 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.752 [2024-11-20 11:35:25.301378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:39.752 [2024-11-20 11:35:25.316168] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb88540) 00:22:39.752 [2024-11-20 11:35:25.316223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16123 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.752 [2024-11-20 11:35:25.316238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:39.752 [2024-11-20 11:35:25.330963] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb88540) 00:22:39.752 [2024-11-20 11:35:25.331018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:15824 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.752 [2024-11-20 11:35:25.331033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:40.011 [2024-11-20 11:35:25.345980] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb88540) 00:22:40.011 [2024-11-20 11:35:25.346046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:15358 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.011 [2024-11-20 11:35:25.346062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:40.011 [2024-11-20 11:35:25.361433] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb88540) 00:22:40.011 [2024-11-20 11:35:25.361492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:21432 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.011 [2024-11-20 11:35:25.361508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:40.011 [2024-11-20 11:35:25.375478] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb88540) 00:22:40.011 [2024-11-20 11:35:25.375531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:21441 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.011 [2024-11-20 11:35:25.375545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:40.011 [2024-11-20 11:35:25.387952] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb88540) 00:22:40.011 
[2024-11-20 11:35:25.388021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:16508 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.011 [2024-11-20 11:35:25.388038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:40.011 [2024-11-20 11:35:25.403094] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb88540) 00:22:40.011 [2024-11-20 11:35:25.403154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:15744 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.011 [2024-11-20 11:35:25.403169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:40.011 [2024-11-20 11:35:25.417502] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb88540) 00:22:40.011 [2024-11-20 11:35:25.417562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:1664 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.011 [2024-11-20 11:35:25.417578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:40.011 [2024-11-20 11:35:25.432221] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb88540) 00:22:40.011 [2024-11-20 11:35:25.432273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:4469 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.011 [2024-11-20 11:35:25.432289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:40.011 [2024-11-20 11:35:25.447093] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb88540) 00:22:40.011 [2024-11-20 11:35:25.447148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:6881 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.011 [2024-11-20 11:35:25.447163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:40.011 [2024-11-20 11:35:25.462548] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb88540) 00:22:40.011 [2024-11-20 11:35:25.462602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:11276 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.011 [2024-11-20 11:35:25.462632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:40.011 [2024-11-20 11:35:25.476581] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb88540) 00:22:40.011 [2024-11-20 11:35:25.476648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16930 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.011 [2024-11-20 11:35:25.476663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:40.011 [2024-11-20 11:35:25.492117] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0xb88540) 00:22:40.011 [2024-11-20 11:35:25.492176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:20490 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.011 [2024-11-20 11:35:25.492191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:40.011 [2024-11-20 11:35:25.506348] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb88540) 00:22:40.011 [2024-11-20 11:35:25.506404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:3037 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.011 [2024-11-20 11:35:25.506419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:40.011 [2024-11-20 11:35:25.520419] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb88540) 00:22:40.011 [2024-11-20 11:35:25.520472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:13259 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.011 [2024-11-20 11:35:25.520487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:40.011 16981.50 IOPS, 66.33 MiB/s 00:22:40.011 Latency(us) 00:22:40.011 [2024-11-20T11:35:25.600Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:40.011 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:22:40.011 nvme0n1 : 2.00 17005.84 66.43 0.00 0.00 7518.80 3649.16 21805.61 00:22:40.011 [2024-11-20T11:35:25.600Z] =================================================================================================================== 00:22:40.011 [2024-11-20T11:35:25.600Z] Total : 17005.84 66.43 0.00 0.00 7518.80 3649.16 21805.61 00:22:40.011 { 00:22:40.011 "results": [ 00:22:40.011 { 00:22:40.011 "job": "nvme0n1", 00:22:40.011 "core_mask": "0x2", 00:22:40.011 "workload": "randread", 00:22:40.011 "status": "finished", 00:22:40.011 "queue_depth": 128, 00:22:40.011 "io_size": 4096, 00:22:40.011 "runtime": 2.004664, 00:22:40.011 "iops": 17005.84237558015, 00:22:40.011 "mibps": 66.42907177960996, 00:22:40.011 "io_failed": 0, 00:22:40.011 "io_timeout": 0, 00:22:40.011 "avg_latency_us": 7518.804206708782, 00:22:40.011 "min_latency_us": 3649.163636363636, 00:22:40.011 "max_latency_us": 21805.614545454544 00:22:40.011 } 00:22:40.011 ], 00:22:40.011 "core_count": 1 00:22:40.011 } 00:22:40.011 11:35:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:22:40.011 11:35:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:22:40.011 11:35:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:22:40.011 | .driver_specific 00:22:40.011 | .nvme_error 00:22:40.011 | .status_code 00:22:40.011 | .command_transient_transport_error' 00:22:40.011 11:35:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:22:40.270 11:35:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 133 > 0 )) 00:22:40.270 11:35:25 
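The trace just above is where digest.sh converts the injected digest errors into a pass/fail signal: get_transient_errcount calls bdev_get_iostat over the bperf RPC socket and extracts the command_transient_transport_error counter with jq, and the test asserts the count (133 in this run) is greater than zero. A minimal standalone sketch of the same query, assuming a bdevperf instance is still listening on /var/tmp/bperf.sock and was started with --nvme-error-stat enabled as shown later in the trace:

    # Pull the per-bdev NVMe error counters from the running bdevperf instance and extract
    # how many completions came back as COMMAND TRANSIENT TRANSPORT ERROR (00/22).
    errcount=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
        | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
    # Same assertion the test performs: at least one transient transport error must have been counted.
    (( errcount > 0 ))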
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 94097 00:22:40.270 11:35:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 94097 ']' 00:22:40.270 11:35:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 94097 00:22:40.270 11:35:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:22:40.270 11:35:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:40.270 11:35:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 94097 00:22:40.529 11:35:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:40.529 11:35:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:40.529 killing process with pid 94097 00:22:40.529 11:35:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 94097' 00:22:40.529 Received shutdown signal, test time was about 2.000000 seconds 00:22:40.529 00:22:40.529 Latency(us) 00:22:40.529 [2024-11-20T11:35:26.118Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:40.529 [2024-11-20T11:35:26.118Z] =================================================================================================================== 00:22:40.529 [2024-11-20T11:35:26.118Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:40.529 11:35:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 94097 00:22:40.529 11:35:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 94097 00:22:40.529 11:35:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:22:40.529 11:35:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:22:40.529 11:35:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:22:40.529 11:35:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:22:40.529 11:35:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:22:40.529 11:35:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=94186 00:22:40.529 11:35:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:22:40.529 11:35:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 94186 /var/tmp/bperf.sock 00:22:40.529 11:35:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 94186 ']' 00:22:40.529 11:35:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:22:40.529 11:35:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:40.529 11:35:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
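After tearing down the previous bperf process (pid 94097), run_bperf_err relaunches bdevperf for the next case (randread, 128 KiB I/Os, queue depth 16) and waits for it to listen on the RPC socket. The command line appears verbatim in the trace; the sketch below only annotates it, and the flag meanings are the usual bdevperf options rather than something this log states:

    # -m 2: core mask 0x2 (single core, matching "Core Mask 0x2" in the job summary above)
    # -r: RPC socket the digest.sh helpers (bperf_rpc) talk to
    # -w randread -o 131072: random reads with 128 KiB I/O size
    # -t 2 -q 16: 2-second run at queue depth 16
    # -z: start idle and wait for the perform_tests RPC before issuing I/O (assumed meaning)
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z &
    bperfpid=$!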
00:22:40.529 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:22:40.529 11:35:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:40.529 11:35:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:22:40.529 [2024-11-20 11:35:26.074471] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 00:22:40.529 [2024-11-20 11:35:26.074591] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94186 ] 00:22:40.529 I/O size of 131072 is greater than zero copy threshold (65536). 00:22:40.529 Zero copy mechanism will not be used. 00:22:40.787 [2024-11-20 11:35:26.226647] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:40.787 [2024-11-20 11:35:26.276661] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:41.045 11:35:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:41.045 11:35:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:22:41.045 11:35:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:22:41.045 11:35:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:22:41.303 11:35:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:22:41.303 11:35:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:41.303 11:35:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:22:41.303 11:35:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:41.303 11:35:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:41.303 11:35:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:41.870 nvme0n1 00:22:41.870 11:35:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:22:41.870 11:35:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:41.870 11:35:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:22:41.870 11:35:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:41.870 11:35:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:22:41.870 11:35:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # 
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:22:41.870 I/O size of 131072 is greater than zero copy threshold (65536). 00:22:41.870 Zero copy mechanism will not be used. 00:22:41.870 Running I/O for 2 seconds... 00:22:41.870 [2024-11-20 11:35:27.368174] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:41.870 [2024-11-20 11:35:27.368240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.870 [2024-11-20 11:35:27.368256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:41.870 [2024-11-20 11:35:27.371720] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:41.870 [2024-11-20 11:35:27.371760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.870 [2024-11-20 11:35:27.371774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:41.870 [2024-11-20 11:35:27.375985] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:41.870 [2024-11-20 11:35:27.376026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.870 [2024-11-20 11:35:27.376040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:41.870 [2024-11-20 11:35:27.380707] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:41.870 [2024-11-20 11:35:27.380770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.870 [2024-11-20 11:35:27.380785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:41.870 [2024-11-20 11:35:27.384421] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:41.870 [2024-11-20 11:35:27.384496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.870 [2024-11-20 11:35:27.384511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:41.871 [2024-11-20 11:35:27.389242] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:41.871 [2024-11-20 11:35:27.389303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.871 [2024-11-20 11:35:27.389318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:41.871 [2024-11-20 11:35:27.393461] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:41.871 [2024-11-20 11:35:27.393524] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.871 [2024-11-20 11:35:27.393547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:41.871 [2024-11-20 11:35:27.397198] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:41.871 [2024-11-20 11:35:27.397245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.871 [2024-11-20 11:35:27.397260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:41.871 [2024-11-20 11:35:27.402393] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:41.871 [2024-11-20 11:35:27.402446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.871 [2024-11-20 11:35:27.402463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:41.871 [2024-11-20 11:35:27.407494] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:41.871 [2024-11-20 11:35:27.407543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.871 [2024-11-20 11:35:27.407558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:41.871 [2024-11-20 11:35:27.413140] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:41.871 [2024-11-20 11:35:27.413192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.871 [2024-11-20 11:35:27.413208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:41.871 [2024-11-20 11:35:27.416931] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:41.871 [2024-11-20 11:35:27.416977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.871 [2024-11-20 11:35:27.416992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:41.871 [2024-11-20 11:35:27.421420] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:41.871 [2024-11-20 11:35:27.421469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.871 [2024-11-20 11:35:27.421483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:41.871 [2024-11-20 11:35:27.426861] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 
00:22:41.871 [2024-11-20 11:35:27.426921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.871 [2024-11-20 11:35:27.426936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:41.871 [2024-11-20 11:35:27.430713] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:41.871 [2024-11-20 11:35:27.430776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.871 [2024-11-20 11:35:27.430792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:41.871 [2024-11-20 11:35:27.435337] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:41.871 [2024-11-20 11:35:27.435394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.871 [2024-11-20 11:35:27.435410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:41.871 [2024-11-20 11:35:27.439876] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:41.871 [2024-11-20 11:35:27.439931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.871 [2024-11-20 11:35:27.439946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:41.871 [2024-11-20 11:35:27.444316] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:41.871 [2024-11-20 11:35:27.444377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.871 [2024-11-20 11:35:27.444393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:41.871 [2024-11-20 11:35:27.448768] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:41.871 [2024-11-20 11:35:27.448819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.871 [2024-11-20 11:35:27.448834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:41.871 [2024-11-20 11:35:27.453832] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:41.871 [2024-11-20 11:35:27.453885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.871 [2024-11-20 11:35:27.453901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:42.132 [2024-11-20 11:35:27.457678] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: 
data digest error on tqpair=(0x1c8d880) 00:22:42.132 [2024-11-20 11:35:27.457782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.132 [2024-11-20 11:35:27.457807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:42.132 [2024-11-20 11:35:27.462109] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:42.132 [2024-11-20 11:35:27.462180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.132 [2024-11-20 11:35:27.462197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:42.132 [2024-11-20 11:35:27.466349] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:42.132 [2024-11-20 11:35:27.466411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.132 [2024-11-20 11:35:27.466426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:42.132 [2024-11-20 11:35:27.469988] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:42.132 [2024-11-20 11:35:27.470032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.132 [2024-11-20 11:35:27.470046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:42.132 [2024-11-20 11:35:27.473710] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:42.132 [2024-11-20 11:35:27.473762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.132 [2024-11-20 11:35:27.473776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:42.132 [2024-11-20 11:35:27.478190] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:42.132 [2024-11-20 11:35:27.478240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.132 [2024-11-20 11:35:27.478255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:42.132 [2024-11-20 11:35:27.481803] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:42.132 [2024-11-20 11:35:27.481846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.132 [2024-11-20 11:35:27.481860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:42.132 [2024-11-20 11:35:27.485661] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:42.132 [2024-11-20 11:35:27.485704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.132 [2024-11-20 11:35:27.485719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:42.132 [2024-11-20 11:35:27.489790] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:42.132 [2024-11-20 11:35:27.489831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.132 [2024-11-20 11:35:27.489845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:42.132 [2024-11-20 11:35:27.493762] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:42.132 [2024-11-20 11:35:27.493808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.132 [2024-11-20 11:35:27.493823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:42.132 [2024-11-20 11:35:27.497866] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:42.132 [2024-11-20 11:35:27.497911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.132 [2024-11-20 11:35:27.497926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:42.132 [2024-11-20 11:35:27.502086] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:42.132 [2024-11-20 11:35:27.502135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.132 [2024-11-20 11:35:27.502149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:42.132 [2024-11-20 11:35:27.506517] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:42.132 [2024-11-20 11:35:27.506566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.132 [2024-11-20 11:35:27.506581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:42.132 [2024-11-20 11:35:27.511848] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:42.132 [2024-11-20 11:35:27.511915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.132 [2024-11-20 11:35:27.511931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 
00:22:42.132 [2024-11-20 11:35:27.515378] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:42.132 [2024-11-20 11:35:27.515441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.132 [2024-11-20 11:35:27.515456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:42.132 [2024-11-20 11:35:27.520067] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:42.132 [2024-11-20 11:35:27.520129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.132 [2024-11-20 11:35:27.520144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:42.132 [2024-11-20 11:35:27.524952] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:42.133 [2024-11-20 11:35:27.525002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.133 [2024-11-20 11:35:27.525017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:42.133 [2024-11-20 11:35:27.528582] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:42.133 [2024-11-20 11:35:27.528639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.133 [2024-11-20 11:35:27.528654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:42.133 [2024-11-20 11:35:27.532645] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:42.133 [2024-11-20 11:35:27.532691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.133 [2024-11-20 11:35:27.532706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:42.133 [2024-11-20 11:35:27.536989] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:42.133 [2024-11-20 11:35:27.537032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.133 [2024-11-20 11:35:27.537045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:42.133 [2024-11-20 11:35:27.540896] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:42.133 [2024-11-20 11:35:27.540939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.133 [2024-11-20 11:35:27.540954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:42.133 [2024-11-20 11:35:27.544673] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:42.133 [2024-11-20 11:35:27.544716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.133 [2024-11-20 11:35:27.544730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:42.133 [2024-11-20 11:35:27.548847] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:42.133 [2024-11-20 11:35:27.548890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.133 [2024-11-20 11:35:27.548904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:42.133 [2024-11-20 11:35:27.553106] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:42.133 [2024-11-20 11:35:27.553152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.133 [2024-11-20 11:35:27.553168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:42.133 [2024-11-20 11:35:27.557079] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:42.133 [2024-11-20 11:35:27.557124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.133 [2024-11-20 11:35:27.557139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:42.133 [2024-11-20 11:35:27.562089] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:42.133 [2024-11-20 11:35:27.562135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.133 [2024-11-20 11:35:27.562149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:42.133 [2024-11-20 11:35:27.565410] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:42.133 [2024-11-20 11:35:27.565449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.133 [2024-11-20 11:35:27.565463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:42.133 [2024-11-20 11:35:27.570239] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:42.133 [2024-11-20 11:35:27.570304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.133 [2024-11-20 11:35:27.570319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:42.133 [2024-11-20 11:35:27.575379] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:42.133 [2024-11-20 11:35:27.575437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.133 [2024-11-20 11:35:27.575454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:42.133 [2024-11-20 11:35:27.578796] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:42.133 [2024-11-20 11:35:27.578844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.133 [2024-11-20 11:35:27.578859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:42.133 [2024-11-20 11:35:27.583866] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:42.133 [2024-11-20 11:35:27.583919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.133 [2024-11-20 11:35:27.583935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:42.133 [2024-11-20 11:35:27.588240] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:42.133 [2024-11-20 11:35:27.588284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.133 [2024-11-20 11:35:27.588298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:42.133 [2024-11-20 11:35:27.592204] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:42.133 [2024-11-20 11:35:27.592246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.133 [2024-11-20 11:35:27.592261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:42.133 [2024-11-20 11:35:27.596572] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:42.133 [2024-11-20 11:35:27.596629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.133 [2024-11-20 11:35:27.596645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:42.133 [2024-11-20 11:35:27.599440] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:42.133 [2024-11-20 11:35:27.599480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.133 [2024-11-20 11:35:27.599495] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:42.133 [2024-11-20 11:35:27.603959] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:42.133 [2024-11-20 11:35:27.604003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.133 [2024-11-20 11:35:27.604018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:42.133 [2024-11-20 11:35:27.608187] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:42.133 [2024-11-20 11:35:27.608227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.133 [2024-11-20 11:35:27.608241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:42.133 [2024-11-20 11:35:27.612533] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:42.133 [2024-11-20 11:35:27.612583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.133 [2024-11-20 11:35:27.612597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:42.133 [2024-11-20 11:35:27.615900] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:42.133 [2024-11-20 11:35:27.615944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.133 [2024-11-20 11:35:27.615959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:42.133 [2024-11-20 11:35:27.620684] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:42.133 [2024-11-20 11:35:27.620748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.133 [2024-11-20 11:35:27.620763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:42.133 [2024-11-20 11:35:27.624601] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:42.133 [2024-11-20 11:35:27.624656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.133 [2024-11-20 11:35:27.624671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:42.133 [2024-11-20 11:35:27.628588] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:42.133 [2024-11-20 11:35:27.628641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:42.133 [2024-11-20 11:35:27.628655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:42.133 [2024-11-20 11:35:27.632260] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:42.133 [2024-11-20 11:35:27.632300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.133 [2024-11-20 11:35:27.632314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:42.133 [2024-11-20 11:35:27.636551] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:42.134 [2024-11-20 11:35:27.636594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.134 [2024-11-20 11:35:27.636609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:42.134 [2024-11-20 11:35:27.641016] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:42.134 [2024-11-20 11:35:27.641063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.134 [2024-11-20 11:35:27.641079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:42.134 [2024-11-20 11:35:27.645263] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:42.134 [2024-11-20 11:35:27.645315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.134 [2024-11-20 11:35:27.645338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:42.134 [2024-11-20 11:35:27.648791] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:42.134 [2024-11-20 11:35:27.648852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.134 [2024-11-20 11:35:27.648867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:42.134 [2024-11-20 11:35:27.653456] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:42.134 [2024-11-20 11:35:27.653507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.134 [2024-11-20 11:35:27.653522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:42.134 [2024-11-20 11:35:27.658687] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:42.134 [2024-11-20 11:35:27.658742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 
lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.134 [2024-11-20 11:35:27.658757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:42.134 [2024-11-20 11:35:27.663539] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:42.134 [2024-11-20 11:35:27.663589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.134 [2024-11-20 11:35:27.663604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:42.134 [2024-11-20 11:35:27.666405] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:42.134 [2024-11-20 11:35:27.666445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.134 [2024-11-20 11:35:27.666459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:42.134 [2024-11-20 11:35:27.671524] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:42.134 [2024-11-20 11:35:27.671571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.134 [2024-11-20 11:35:27.671584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:42.134 [2024-11-20 11:35:27.675047] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:42.134 [2024-11-20 11:35:27.675085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.134 [2024-11-20 11:35:27.675099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:42.134 [2024-11-20 11:35:27.679454] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:42.134 [2024-11-20 11:35:27.679497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.134 [2024-11-20 11:35:27.679511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:42.134 [2024-11-20 11:35:27.682963] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:42.134 [2024-11-20 11:35:27.683006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.134 [2024-11-20 11:35:27.683021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:42.134 [2024-11-20 11:35:27.687337] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:42.134 [2024-11-20 11:35:27.687390] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.134 [2024-11-20 11:35:27.687405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:42.134 [2024-11-20 11:35:27.691327] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:42.134 [2024-11-20 11:35:27.691377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.134 [2024-11-20 11:35:27.691393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:42.134 [2024-11-20 11:35:27.696110] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:42.134 [2024-11-20 11:35:27.696181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.134 [2024-11-20 11:35:27.696198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:42.134 [2024-11-20 11:35:27.700340] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:42.134 [2024-11-20 11:35:27.700422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.134 [2024-11-20 11:35:27.700448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:42.134 [2024-11-20 11:35:27.706338] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:42.134 [2024-11-20 11:35:27.706441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.134 [2024-11-20 11:35:27.706473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:42.134 [2024-11-20 11:35:27.712153] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:42.134 [2024-11-20 11:35:27.712216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.134 [2024-11-20 11:35:27.712232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:42.396 [2024-11-20 11:35:27.717268] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:42.396 [2024-11-20 11:35:27.717317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.396 [2024-11-20 11:35:27.717332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:42.396 [2024-11-20 11:35:27.721732] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:42.396 
[2024-11-20 11:35:27.721783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.396 [2024-11-20 11:35:27.721797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:42.396 [2024-11-20 11:35:27.725399] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:42.396 [2024-11-20 11:35:27.725445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.396 [2024-11-20 11:35:27.725460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:42.396 [2024-11-20 11:35:27.730177] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:42.396 [2024-11-20 11:35:27.730225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.396 [2024-11-20 11:35:27.730239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:42.396 [2024-11-20 11:35:27.734473] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:42.396 [2024-11-20 11:35:27.734531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.396 [2024-11-20 11:35:27.734546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:42.396 [2024-11-20 11:35:27.738029] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:42.396 [2024-11-20 11:35:27.738091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.396 [2024-11-20 11:35:27.738106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:42.396 [2024-11-20 11:35:27.742579] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:42.396 [2024-11-20 11:35:27.742658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.396 [2024-11-20 11:35:27.742674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:42.396 [2024-11-20 11:35:27.746997] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:42.396 [2024-11-20 11:35:27.747046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.396 [2024-11-20 11:35:27.747060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:42.396 [2024-11-20 11:35:27.750294] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x1c8d880) 00:22:42.396 [2024-11-20 11:35:27.750332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.396 [2024-11-20 11:35:27.750346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:42.396 [2024-11-20 11:35:27.755462] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:42.396 [2024-11-20 11:35:27.755520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.396 [2024-11-20 11:35:27.755534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:42.396 [2024-11-20 11:35:27.760817] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:42.396 [2024-11-20 11:35:27.760888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.396 [2024-11-20 11:35:27.760904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:42.396 [2024-11-20 11:35:27.764463] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:42.396 [2024-11-20 11:35:27.764536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.396 [2024-11-20 11:35:27.764552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:42.396 [2024-11-20 11:35:27.769366] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:42.396 [2024-11-20 11:35:27.769429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.396 [2024-11-20 11:35:27.769444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:42.396 [2024-11-20 11:35:27.774590] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:42.396 [2024-11-20 11:35:27.774649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.396 [2024-11-20 11:35:27.774664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:42.396 [2024-11-20 11:35:27.779555] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:42.396 [2024-11-20 11:35:27.779599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.396 [2024-11-20 11:35:27.779628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:42.396 [2024-11-20 11:35:27.782882] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:42.396 [2024-11-20 11:35:27.782920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.396 [2024-11-20 11:35:27.782935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:42.396 [2024-11-20 11:35:27.786870] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:42.396 [2024-11-20 11:35:27.786910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.396 [2024-11-20 11:35:27.786923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:42.396 [2024-11-20 11:35:27.791000] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:42.396 [2024-11-20 11:35:27.791040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.396 [2024-11-20 11:35:27.791055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:42.396 [2024-11-20 11:35:27.794773] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:42.396 [2024-11-20 11:35:27.794814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.396 [2024-11-20 11:35:27.794828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:42.396 [2024-11-20 11:35:27.798233] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:42.396 [2024-11-20 11:35:27.798273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.396 [2024-11-20 11:35:27.798286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:42.396 [2024-11-20 11:35:27.803295] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:42.396 [2024-11-20 11:35:27.803337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.396 [2024-11-20 11:35:27.803350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:42.396 [2024-11-20 11:35:27.807760] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:42.397 [2024-11-20 11:35:27.807801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.397 [2024-11-20 11:35:27.807815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 
00:22:42.397 [2024-11-20 11:35:27.810561] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:42.397 [2024-11-20 11:35:27.810596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.397 [2024-11-20 11:35:27.810610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:42.397 [2024-11-20 11:35:27.815578] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:42.397 [2024-11-20 11:35:27.815632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.397 [2024-11-20 11:35:27.815647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:42.397 [2024-11-20 11:35:27.819408] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:42.397 [2024-11-20 11:35:27.819447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.397 [2024-11-20 11:35:27.819460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:42.397 [2024-11-20 11:35:27.823256] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:42.397 [2024-11-20 11:35:27.823297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.397 [2024-11-20 11:35:27.823311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:42.397 [2024-11-20 11:35:27.827441] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:42.397 [2024-11-20 11:35:27.827484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.397 [2024-11-20 11:35:27.827498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:42.397 [2024-11-20 11:35:27.831176] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:42.397 [2024-11-20 11:35:27.831224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.397 [2024-11-20 11:35:27.831238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:42.397 [2024-11-20 11:35:27.835039] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:42.397 [2024-11-20 11:35:27.835082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.397 [2024-11-20 11:35:27.835097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:42.397 [2024-11-20 11:35:27.838369] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:42.397 [2024-11-20 11:35:27.838406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.397 [2024-11-20 11:35:27.838419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:42.397 [2024-11-20 11:35:27.842246] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:42.397 [2024-11-20 11:35:27.842285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.397 [2024-11-20 11:35:27.842300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:42.397 [2024-11-20 11:35:27.846005] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:42.397 [2024-11-20 11:35:27.846048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.397 [2024-11-20 11:35:27.846062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:42.397 [2024-11-20 11:35:27.850314] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:42.397 [2024-11-20 11:35:27.850364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.397 [2024-11-20 11:35:27.850378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:42.397 [2024-11-20 11:35:27.854173] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:42.397 [2024-11-20 11:35:27.854223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.397 [2024-11-20 11:35:27.854237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:42.397 [2024-11-20 11:35:27.858539] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:42.397 [2024-11-20 11:35:27.858600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.397 [2024-11-20 11:35:27.858628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:42.397 [2024-11-20 11:35:27.862060] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:42.397 [2024-11-20 11:35:27.862103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.397 [2024-11-20 11:35:27.862117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:42.397 [2024-11-20 11:35:27.866376] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:42.397 [2024-11-20 11:35:27.866418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.397 [2024-11-20 11:35:27.866432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:42.397 [2024-11-20 11:35:27.869600] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:42.397 [2024-11-20 11:35:27.869646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.397 [2024-11-20 11:35:27.869660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:42.397 [2024-11-20 11:35:27.874047] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:42.397 [2024-11-20 11:35:27.874093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.397 [2024-11-20 11:35:27.874109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:42.397 [2024-11-20 11:35:27.878510] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:42.397 [2024-11-20 11:35:27.878556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.397 [2024-11-20 11:35:27.878571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:42.397 [2024-11-20 11:35:27.882312] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:42.397 [2024-11-20 11:35:27.882370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.397 [2024-11-20 11:35:27.882386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:42.397 [2024-11-20 11:35:27.887597] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:42.397 [2024-11-20 11:35:27.887673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.397 [2024-11-20 11:35:27.887688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:42.397 [2024-11-20 11:35:27.892926] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:42.397 [2024-11-20 11:35:27.892995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.397 [2024-11-20 11:35:27.893012] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:42.397 [2024-11-20 11:35:27.896891] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:42.397 [2024-11-20 11:35:27.896949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.397 [2024-11-20 11:35:27.896964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:42.397 [2024-11-20 11:35:27.901603] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:42.397 [2024-11-20 11:35:27.901683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.397 [2024-11-20 11:35:27.901699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:42.397 [2024-11-20 11:35:27.907058] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:42.397 [2024-11-20 11:35:27.907131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.397 [2024-11-20 11:35:27.907147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:42.397 [2024-11-20 11:35:27.910845] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:42.397 [2024-11-20 11:35:27.910903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.397 [2024-11-20 11:35:27.910918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:42.397 [2024-11-20 11:35:27.915302] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:42.397 [2024-11-20 11:35:27.915343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.397 [2024-11-20 11:35:27.915357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:42.397 [2024-11-20 11:35:27.918991] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:42.398 [2024-11-20 11:35:27.919030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.398 [2024-11-20 11:35:27.919043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:42.398 [2024-11-20 11:35:27.922546] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:42.398 [2024-11-20 11:35:27.922584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.398 
[2024-11-20 11:35:27.922597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:42.398 [2024-11-20 11:35:27.927171] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:42.398 [2024-11-20 11:35:27.927214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.398 [2024-11-20 11:35:27.927229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:42.398 [2024-11-20 11:35:27.931041] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:42.398 [2024-11-20 11:35:27.931080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.398 [2024-11-20 11:35:27.931094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:42.398 [2024-11-20 11:35:27.935003] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:42.398 [2024-11-20 11:35:27.935042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.398 [2024-11-20 11:35:27.935056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:42.398 [2024-11-20 11:35:27.938650] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:42.398 [2024-11-20 11:35:27.938687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.398 [2024-11-20 11:35:27.938701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:42.398 [2024-11-20 11:35:27.942737] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:42.398 [2024-11-20 11:35:27.942776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.398 [2024-11-20 11:35:27.942789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:42.398 [2024-11-20 11:35:27.946977] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:42.398 [2024-11-20 11:35:27.947016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.398 [2024-11-20 11:35:27.947030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:42.398 [2024-11-20 11:35:27.951053] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:42.398 [2024-11-20 11:35:27.951094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21184 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.398 [2024-11-20 11:35:27.951107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:42.398 [2024-11-20 11:35:27.955072] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:42.398 [2024-11-20 11:35:27.955110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.398 [2024-11-20 11:35:27.955124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:42.398 [2024-11-20 11:35:27.959337] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:42.398 [2024-11-20 11:35:27.959381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.398 [2024-11-20 11:35:27.959394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:42.398 [2024-11-20 11:35:27.963237] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:42.398 [2024-11-20 11:35:27.963277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.398 [2024-11-20 11:35:27.963292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:42.398 [2024-11-20 11:35:27.968484] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:42.398 [2024-11-20 11:35:27.968555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.398 [2024-11-20 11:35:27.968578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:42.398 [2024-11-20 11:35:27.975366] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:42.398 [2024-11-20 11:35:27.975437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.398 [2024-11-20 11:35:27.975463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:42.709 [2024-11-20 11:35:27.982005] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:42.709 [2024-11-20 11:35:27.982084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.709 [2024-11-20 11:35:27.982107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:42.709 [2024-11-20 11:35:27.989151] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:42.709 [2024-11-20 11:35:27.989233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:5 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.709 [2024-11-20 11:35:27.989256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:42.709 [2024-11-20 11:35:27.994330] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:42.709 [2024-11-20 11:35:27.994380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.709 [2024-11-20 11:35:27.994396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:42.709 [2024-11-20 11:35:27.999228] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:42.709 [2024-11-20 11:35:27.999298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.709 [2024-11-20 11:35:27.999322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:42.709 [2024-11-20 11:35:28.005700] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:42.709 [2024-11-20 11:35:28.005791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.709 [2024-11-20 11:35:28.005815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:42.709 [2024-11-20 11:35:28.013157] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:42.709 [2024-11-20 11:35:28.013254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.709 [2024-11-20 11:35:28.013281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:42.709 [2024-11-20 11:35:28.020905] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:42.709 [2024-11-20 11:35:28.020992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.709 [2024-11-20 11:35:28.021019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:42.709 [2024-11-20 11:35:28.028212] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:42.709 [2024-11-20 11:35:28.028290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.709 [2024-11-20 11:35:28.028315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:42.709 [2024-11-20 11:35:28.035515] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:42.709 [2024-11-20 11:35:28.035605] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.710 [2024-11-20 11:35:28.035654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:42.710 [2024-11-20 11:35:28.042927] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:42.710 [2024-11-20 11:35:28.042991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.710 [2024-11-20 11:35:28.043014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:42.710 [2024-11-20 11:35:28.050081] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:42.710 [2024-11-20 11:35:28.050151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.710 [2024-11-20 11:35:28.050177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:42.710 [2024-11-20 11:35:28.057435] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:42.710 [2024-11-20 11:35:28.057513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.710 [2024-11-20 11:35:28.057536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:42.710 [2024-11-20 11:35:28.064739] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:42.710 [2024-11-20 11:35:28.064809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.710 [2024-11-20 11:35:28.064833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:42.710 [2024-11-20 11:35:28.071947] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:42.710 [2024-11-20 11:35:28.072018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.710 [2024-11-20 11:35:28.072043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:42.710 [2024-11-20 11:35:28.079218] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:42.710 [2024-11-20 11:35:28.079294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.710 [2024-11-20 11:35:28.079318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:42.710 [2024-11-20 11:35:28.086512] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 
00:22:42.710 [2024-11-20 11:35:28.086578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.710 [2024-11-20 11:35:28.086603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:42.710 [2024-11-20 11:35:28.093841] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:42.710 [2024-11-20 11:35:28.093915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.710 [2024-11-20 11:35:28.093940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:42.710 [2024-11-20 11:35:28.101136] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:42.710 [2024-11-20 11:35:28.101232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.710 [2024-11-20 11:35:28.101255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:42.710 [2024-11-20 11:35:28.108246] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:42.710 [2024-11-20 11:35:28.108318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.710 [2024-11-20 11:35:28.108344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:42.710 [2024-11-20 11:35:28.115162] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:42.710 [2024-11-20 11:35:28.115223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.710 [2024-11-20 11:35:28.115247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:42.710 [2024-11-20 11:35:28.122351] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:42.710 [2024-11-20 11:35:28.122420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.710 [2024-11-20 11:35:28.122445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:42.710 [2024-11-20 11:35:28.129282] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:42.710 [2024-11-20 11:35:28.129355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.710 [2024-11-20 11:35:28.129381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:42.710 [2024-11-20 11:35:28.137083] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:42.710 [2024-11-20 11:35:28.137185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.710 [2024-11-20 11:35:28.137214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:42.710 [2024-11-20 11:35:28.144817] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:42.710 [2024-11-20 11:35:28.144918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.710 [2024-11-20 11:35:28.144943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:42.710 [2024-11-20 11:35:28.152684] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:42.710 [2024-11-20 11:35:28.152783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.710 [2024-11-20 11:35:28.152809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:42.710 [2024-11-20 11:35:28.159890] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:42.710 [2024-11-20 11:35:28.159977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.710 [2024-11-20 11:35:28.160004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:42.710 [2024-11-20 11:35:28.167260] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:42.710 [2024-11-20 11:35:28.167342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.710 [2024-11-20 11:35:28.167367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:42.710 [2024-11-20 11:35:28.174713] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:42.710 [2024-11-20 11:35:28.174783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.710 [2024-11-20 11:35:28.174806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:42.710 [2024-11-20 11:35:28.181943] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:42.710 [2024-11-20 11:35:28.182039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.710 [2024-11-20 11:35:28.182062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:42.710 [2024-11-20 11:35:28.189195] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:42.710 [2024-11-20 11:35:28.189278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.710 [2024-11-20 11:35:28.189301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:42.710 [2024-11-20 11:35:28.196285] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:42.710 [2024-11-20 11:35:28.196355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.710 [2024-11-20 11:35:28.196380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:42.710 [2024-11-20 11:35:28.203193] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:42.710 [2024-11-20 11:35:28.203255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.710 [2024-11-20 11:35:28.203279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:42.710 [2024-11-20 11:35:28.210453] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:42.710 [2024-11-20 11:35:28.210519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.710 [2024-11-20 11:35:28.210544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:42.710 [2024-11-20 11:35:28.217419] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:42.710 [2024-11-20 11:35:28.217498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.710 [2024-11-20 11:35:28.217522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:42.710 [2024-11-20 11:35:28.224900] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:42.710 [2024-11-20 11:35:28.224984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.710 [2024-11-20 11:35:28.225011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:42.710 [2024-11-20 11:35:28.231830] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:42.711 [2024-11-20 11:35:28.231911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.711 [2024-11-20 11:35:28.231935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 
dnr:0 00:22:42.711 [2024-11-20 11:35:28.238998] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:42.711 [2024-11-20 11:35:28.239080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.711 [2024-11-20 11:35:28.239103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:42.711 [2024-11-20 11:35:28.246260] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:42.711 [2024-11-20 11:35:28.246341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.711 [2024-11-20 11:35:28.246365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:42.711 [2024-11-20 11:35:28.252150] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:42.711 [2024-11-20 11:35:28.252204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.711 [2024-11-20 11:35:28.252220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:42.711 [2024-11-20 11:35:28.257076] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:42.711 [2024-11-20 11:35:28.257130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.711 [2024-11-20 11:35:28.257145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:42.711 [2024-11-20 11:35:28.262684] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:42.711 [2024-11-20 11:35:28.262743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.711 [2024-11-20 11:35:28.262759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:42.711 [2024-11-20 11:35:28.267824] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:42.711 [2024-11-20 11:35:28.267876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.711 [2024-11-20 11:35:28.267892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:42.711 [2024-11-20 11:35:28.270771] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:42.711 [2024-11-20 11:35:28.270814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.711 [2024-11-20 11:35:28.270828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:42.711 [2024-11-20 11:35:28.276212] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:42.711 [2024-11-20 11:35:28.276266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.711 [2024-11-20 11:35:28.276282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:42.711 [2024-11-20 11:35:28.279690] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:42.711 [2024-11-20 11:35:28.279728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.711 [2024-11-20 11:35:28.279743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:42.711 [2024-11-20 11:35:28.284066] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:42.711 [2024-11-20 11:35:28.284111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.711 [2024-11-20 11:35:28.284125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:42.711 [2024-11-20 11:35:28.288810] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:42.711 [2024-11-20 11:35:28.288861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.711 [2024-11-20 11:35:28.288877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:42.711 [2024-11-20 11:35:28.293604] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:42.711 [2024-11-20 11:35:28.293658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.711 [2024-11-20 11:35:28.293672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:42.972 [2024-11-20 11:35:28.296669] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:42.972 [2024-11-20 11:35:28.296711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.972 [2024-11-20 11:35:28.296726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:42.972 [2024-11-20 11:35:28.301199] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:42.972 [2024-11-20 11:35:28.301245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.972 [2024-11-20 11:35:28.301259] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:42.972 [2024-11-20 11:35:28.305390] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:42.972 [2024-11-20 11:35:28.305434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.972 [2024-11-20 11:35:28.305448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:42.972 [2024-11-20 11:35:28.308635] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:42.972 [2024-11-20 11:35:28.308674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.972 [2024-11-20 11:35:28.308688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:42.972 [2024-11-20 11:35:28.313009] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:42.972 [2024-11-20 11:35:28.313059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.972 [2024-11-20 11:35:28.313074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:42.972 [2024-11-20 11:35:28.317109] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:42.972 [2024-11-20 11:35:28.317168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.972 [2024-11-20 11:35:28.317182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:42.972 [2024-11-20 11:35:28.321146] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:42.972 [2024-11-20 11:35:28.321191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.972 [2024-11-20 11:35:28.321206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:42.972 [2024-11-20 11:35:28.325591] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:42.972 [2024-11-20 11:35:28.325652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.972 [2024-11-20 11:35:28.325667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:42.972 [2024-11-20 11:35:28.329066] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:42.972 [2024-11-20 11:35:28.329114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:42.972 [2024-11-20 11:35:28.329128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:42.972 [2024-11-20 11:35:28.333438] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:42.972 [2024-11-20 11:35:28.333489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.972 [2024-11-20 11:35:28.333504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:42.972 [2024-11-20 11:35:28.337300] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:42.972 [2024-11-20 11:35:28.337351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.972 [2024-11-20 11:35:28.337366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:42.972 [2024-11-20 11:35:28.342036] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:42.972 [2024-11-20 11:35:28.342089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.972 [2024-11-20 11:35:28.342103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:42.972 [2024-11-20 11:35:28.346011] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:42.972 [2024-11-20 11:35:28.346059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.972 [2024-11-20 11:35:28.346074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:42.972 [2024-11-20 11:35:28.349808] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:42.972 [2024-11-20 11:35:28.349853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.972 [2024-11-20 11:35:28.349867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:42.972 [2024-11-20 11:35:28.353942] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:42.972 [2024-11-20 11:35:28.353993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.972 [2024-11-20 11:35:28.354008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:42.972 6379.00 IOPS, 797.38 MiB/s [2024-11-20T11:35:28.561Z] [2024-11-20 11:35:28.359865] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:42.972 [2024-11-20 11:35:28.359915] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.972 [2024-11-20 11:35:28.359931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:42.972 [2024-11-20 11:35:28.363998] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:42.972 [2024-11-20 11:35:28.364050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.972 [2024-11-20 11:35:28.364065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:42.972 [2024-11-20 11:35:28.367575] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:42.972 [2024-11-20 11:35:28.367634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.972 [2024-11-20 11:35:28.367650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:42.972 [2024-11-20 11:35:28.371980] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:42.972 [2024-11-20 11:35:28.372026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.972 [2024-11-20 11:35:28.372041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:42.972 [2024-11-20 11:35:28.377260] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:42.973 [2024-11-20 11:35:28.377311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.973 [2024-11-20 11:35:28.377326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:42.973 [2024-11-20 11:35:28.380785] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:42.973 [2024-11-20 11:35:28.380828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.973 [2024-11-20 11:35:28.380842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:42.973 [2024-11-20 11:35:28.385204] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:42.973 [2024-11-20 11:35:28.385253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.973 [2024-11-20 11:35:28.385268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:42.973 [2024-11-20 11:35:28.388861] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 
00:22:42.973 [2024-11-20 11:35:28.388907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.973 [2024-11-20 11:35:28.388922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:42.973 [2024-11-20 11:35:28.392876] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:42.973 [2024-11-20 11:35:28.392924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.973 [2024-11-20 11:35:28.392939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:42.973 [2024-11-20 11:35:28.397562] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:42.973 [2024-11-20 11:35:28.397627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.973 [2024-11-20 11:35:28.397644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:42.973 [2024-11-20 11:35:28.401188] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:42.973 [2024-11-20 11:35:28.401237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.973 [2024-11-20 11:35:28.401253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:42.973 [2024-11-20 11:35:28.406502] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:42.973 [2024-11-20 11:35:28.406557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.973 [2024-11-20 11:35:28.406574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:42.973 [2024-11-20 11:35:28.411093] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:42.973 [2024-11-20 11:35:28.411144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.973 [2024-11-20 11:35:28.411159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:42.973 [2024-11-20 11:35:28.414746] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:42.973 [2024-11-20 11:35:28.414794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.973 [2024-11-20 11:35:28.414809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:42.973 [2024-11-20 11:35:28.419257] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:42.973 [2024-11-20 11:35:28.419324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.973 [2024-11-20 11:35:28.419343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:42.973 [2024-11-20 11:35:28.423716] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:42.973 [2024-11-20 11:35:28.423767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.973 [2024-11-20 11:35:28.423782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:42.973 [2024-11-20 11:35:28.427144] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:42.973 [2024-11-20 11:35:28.427202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.973 [2024-11-20 11:35:28.427229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:42.973 [2024-11-20 11:35:28.432018] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:42.973 [2024-11-20 11:35:28.432075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.973 [2024-11-20 11:35:28.432098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:42.973 [2024-11-20 11:35:28.438098] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:42.973 [2024-11-20 11:35:28.438163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.973 [2024-11-20 11:35:28.438186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:42.973 [2024-11-20 11:35:28.444292] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:42.973 [2024-11-20 11:35:28.444359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.973 [2024-11-20 11:35:28.444382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:42.973 [2024-11-20 11:35:28.449015] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:42.973 [2024-11-20 11:35:28.449078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.973 [2024-11-20 11:35:28.449102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 
m:0 dnr:0 00:22:42.973 [2024-11-20 11:35:28.454168] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:42.973 [2024-11-20 11:35:28.454230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.973 [2024-11-20 11:35:28.454252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:42.973 [2024-11-20 11:35:28.460255] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:42.973 [2024-11-20 11:35:28.460327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.973 [2024-11-20 11:35:28.460354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:42.973 [2024-11-20 11:35:28.466061] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:42.973 [2024-11-20 11:35:28.466131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.973 [2024-11-20 11:35:28.466157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:42.973 [2024-11-20 11:35:28.470862] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:42.973 [2024-11-20 11:35:28.470916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.973 [2024-11-20 11:35:28.470939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:42.973 [2024-11-20 11:35:28.475872] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:42.973 [2024-11-20 11:35:28.475929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.973 [2024-11-20 11:35:28.475953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:42.973 [2024-11-20 11:35:28.481686] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:42.973 [2024-11-20 11:35:28.481760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.973 [2024-11-20 11:35:28.481784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:42.973 [2024-11-20 11:35:28.486029] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:42.973 [2024-11-20 11:35:28.486087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.973 [2024-11-20 11:35:28.486110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:42.973 [2024-11-20 11:35:28.492126] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:42.973 [2024-11-20 11:35:28.492195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.973 [2024-11-20 11:35:28.492219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:42.973 [2024-11-20 11:35:28.497000] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:42.973 [2024-11-20 11:35:28.497056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.973 [2024-11-20 11:35:28.497079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:42.973 [2024-11-20 11:35:28.502067] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:42.973 [2024-11-20 11:35:28.502118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.974 [2024-11-20 11:35:28.502140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:42.974 [2024-11-20 11:35:28.507304] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:42.974 [2024-11-20 11:35:28.507360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.974 [2024-11-20 11:35:28.507383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:42.974 [2024-11-20 11:35:28.512344] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:42.974 [2024-11-20 11:35:28.512401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.974 [2024-11-20 11:35:28.512425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:42.974 [2024-11-20 11:35:28.517575] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:42.974 [2024-11-20 11:35:28.517645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.974 [2024-11-20 11:35:28.517671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:42.974 [2024-11-20 11:35:28.523110] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:42.974 [2024-11-20 11:35:28.523169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.974 [2024-11-20 11:35:28.523192] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:42.974 [2024-11-20 11:35:28.526707] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:42.974 [2024-11-20 11:35:28.526759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.974 [2024-11-20 11:35:28.526782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:42.974 [2024-11-20 11:35:28.530978] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:42.974 [2024-11-20 11:35:28.531031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.974 [2024-11-20 11:35:28.531054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:42.974 [2024-11-20 11:35:28.535741] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:42.974 [2024-11-20 11:35:28.535797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.974 [2024-11-20 11:35:28.535823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:42.974 [2024-11-20 11:35:28.539882] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:42.974 [2024-11-20 11:35:28.539939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.974 [2024-11-20 11:35:28.539963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:42.974 [2024-11-20 11:35:28.544741] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:42.974 [2024-11-20 11:35:28.544800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.974 [2024-11-20 11:35:28.544826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:42.974 [2024-11-20 11:35:28.549732] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:42.974 [2024-11-20 11:35:28.549801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.974 [2024-11-20 11:35:28.549824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:42.974 [2024-11-20 11:35:28.554508] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:42.974 [2024-11-20 11:35:28.554562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.974 
[2024-11-20 11:35:28.554585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:43.235 [2024-11-20 11:35:28.559604] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:43.235 [2024-11-20 11:35:28.559681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.235 [2024-11-20 11:35:28.559706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:43.235 [2024-11-20 11:35:28.562686] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:43.235 [2024-11-20 11:35:28.562741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.235 [2024-11-20 11:35:28.562763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:43.235 [2024-11-20 11:35:28.567278] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:43.235 [2024-11-20 11:35:28.567340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.235 [2024-11-20 11:35:28.567363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:43.235 [2024-11-20 11:35:28.571925] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:43.235 [2024-11-20 11:35:28.571981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.235 [2024-11-20 11:35:28.572004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:43.235 [2024-11-20 11:35:28.576399] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:43.235 [2024-11-20 11:35:28.576457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.235 [2024-11-20 11:35:28.576482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:43.235 [2024-11-20 11:35:28.580183] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:43.235 [2024-11-20 11:35:28.580245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.235 [2024-11-20 11:35:28.580268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:43.235 [2024-11-20 11:35:28.584926] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:43.235 [2024-11-20 11:35:28.584986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25472 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.235 [2024-11-20 11:35:28.585009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:43.235 [2024-11-20 11:35:28.589928] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:43.235 [2024-11-20 11:35:28.589987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.235 [2024-11-20 11:35:28.590009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:43.235 [2024-11-20 11:35:28.595628] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:43.235 [2024-11-20 11:35:28.595684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.235 [2024-11-20 11:35:28.595709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:43.235 [2024-11-20 11:35:28.599721] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:43.235 [2024-11-20 11:35:28.599769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.235 [2024-11-20 11:35:28.599792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:43.235 [2024-11-20 11:35:28.604123] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:43.235 [2024-11-20 11:35:28.604173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.235 [2024-11-20 11:35:28.604196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:43.235 [2024-11-20 11:35:28.608797] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:43.235 [2024-11-20 11:35:28.608847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.235 [2024-11-20 11:35:28.608870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:43.235 [2024-11-20 11:35:28.612466] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:43.235 [2024-11-20 11:35:28.612517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.235 [2024-11-20 11:35:28.612539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:43.235 [2024-11-20 11:35:28.616860] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:43.235 [2024-11-20 11:35:28.616910] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.235 [2024-11-20 11:35:28.616933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:43.235 [2024-11-20 11:35:28.621349] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:43.235 [2024-11-20 11:35:28.621399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.235 [2024-11-20 11:35:28.621423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:43.235 [2024-11-20 11:35:28.625833] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:43.235 [2024-11-20 11:35:28.625879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.235 [2024-11-20 11:35:28.625901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:43.235 [2024-11-20 11:35:28.630129] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:43.235 [2024-11-20 11:35:28.630179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.235 [2024-11-20 11:35:28.630201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:43.235 [2024-11-20 11:35:28.634327] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:43.235 [2024-11-20 11:35:28.634377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.235 [2024-11-20 11:35:28.634400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:43.235 [2024-11-20 11:35:28.638405] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:43.235 [2024-11-20 11:35:28.638460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.235 [2024-11-20 11:35:28.638482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:43.235 [2024-11-20 11:35:28.642597] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:43.235 [2024-11-20 11:35:28.642664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.235 [2024-11-20 11:35:28.642687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:43.235 [2024-11-20 11:35:28.647553] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:43.236 [2024-11-20 11:35:28.647608] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.236 [2024-11-20 11:35:28.647646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:43.236 [2024-11-20 11:35:28.651609] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:43.236 [2024-11-20 11:35:28.651673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.236 [2024-11-20 11:35:28.651698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:43.236 [2024-11-20 11:35:28.655528] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:43.236 [2024-11-20 11:35:28.655581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.236 [2024-11-20 11:35:28.655603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:43.236 [2024-11-20 11:35:28.660129] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:43.236 [2024-11-20 11:35:28.660179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.236 [2024-11-20 11:35:28.660201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:43.236 [2024-11-20 11:35:28.664370] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:43.236 [2024-11-20 11:35:28.664420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.236 [2024-11-20 11:35:28.664443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:43.236 [2024-11-20 11:35:28.668077] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:43.236 [2024-11-20 11:35:28.668128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.236 [2024-11-20 11:35:28.668151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:43.236 [2024-11-20 11:35:28.672380] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:43.236 [2024-11-20 11:35:28.672429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.236 [2024-11-20 11:35:28.672454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:43.236 [2024-11-20 11:35:28.676440] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 
00:22:43.236 [2024-11-20 11:35:28.676488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.236 [2024-11-20 11:35:28.676510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:43.236 [2024-11-20 11:35:28.680526] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:43.236 [2024-11-20 11:35:28.680574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.236 [2024-11-20 11:35:28.680596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:43.236 [2024-11-20 11:35:28.684709] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:43.236 [2024-11-20 11:35:28.684761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.236 [2024-11-20 11:35:28.684785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:43.236 [2024-11-20 11:35:28.688068] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:43.236 [2024-11-20 11:35:28.688115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.236 [2024-11-20 11:35:28.688138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:43.236 [2024-11-20 11:35:28.694049] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:43.236 [2024-11-20 11:35:28.694112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.236 [2024-11-20 11:35:28.694139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:43.236 [2024-11-20 11:35:28.699037] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:43.236 [2024-11-20 11:35:28.699090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.236 [2024-11-20 11:35:28.699113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:43.236 [2024-11-20 11:35:28.704495] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:43.236 [2024-11-20 11:35:28.704592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.236 [2024-11-20 11:35:28.704644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:43.236 [2024-11-20 11:35:28.710714] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: 
data digest error on tqpair=(0x1c8d880) 00:22:43.236 [2024-11-20 11:35:28.710781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.236 [2024-11-20 11:35:28.710796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:43.236 [2024-11-20 11:35:28.714500] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:43.236 [2024-11-20 11:35:28.714585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.236 [2024-11-20 11:35:28.714611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:43.236 [2024-11-20 11:35:28.721539] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:43.236 [2024-11-20 11:35:28.721638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.236 [2024-11-20 11:35:28.721676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:43.236 [2024-11-20 11:35:28.728381] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:43.236 [2024-11-20 11:35:28.728463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.236 [2024-11-20 11:35:28.728496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:43.236 [2024-11-20 11:35:28.732489] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:43.236 [2024-11-20 11:35:28.732542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.236 [2024-11-20 11:35:28.732568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:43.236 [2024-11-20 11:35:28.738314] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:43.236 [2024-11-20 11:35:28.738394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.236 [2024-11-20 11:35:28.738410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:43.236 [2024-11-20 11:35:28.743184] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:43.236 [2024-11-20 11:35:28.743284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.236 [2024-11-20 11:35:28.743300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:43.236 [2024-11-20 11:35:28.748277] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:43.236 [2024-11-20 11:35:28.748332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.236 [2024-11-20 11:35:28.748347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:43.236 [2024-11-20 11:35:28.753603] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:43.236 [2024-11-20 11:35:28.753686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.236 [2024-11-20 11:35:28.753701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:43.236 [2024-11-20 11:35:28.758654] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:43.236 [2024-11-20 11:35:28.758705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.236 [2024-11-20 11:35:28.758719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:43.236 [2024-11-20 11:35:28.763508] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:43.236 [2024-11-20 11:35:28.763552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.236 [2024-11-20 11:35:28.763566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:43.236 [2024-11-20 11:35:28.768010] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:43.236 [2024-11-20 11:35:28.768050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.236 [2024-11-20 11:35:28.768064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:43.236 [2024-11-20 11:35:28.773245] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:43.236 [2024-11-20 11:35:28.773288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.236 [2024-11-20 11:35:28.773303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:43.236 [2024-11-20 11:35:28.777708] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:43.237 [2024-11-20 11:35:28.777758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.237 [2024-11-20 11:35:28.777772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 
00:22:43.237 [2024-11-20 11:35:28.783099] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:43.237 [2024-11-20 11:35:28.783144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.237 [2024-11-20 11:35:28.783159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:43.237 [2024-11-20 11:35:28.787932] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:43.237 [2024-11-20 11:35:28.787975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.237 [2024-11-20 11:35:28.787990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:43.237 [2024-11-20 11:35:28.792029] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:43.237 [2024-11-20 11:35:28.792072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.237 [2024-11-20 11:35:28.792086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:43.237 [2024-11-20 11:35:28.797499] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:43.237 [2024-11-20 11:35:28.797542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.237 [2024-11-20 11:35:28.797557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:43.237 [2024-11-20 11:35:28.802031] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:43.237 [2024-11-20 11:35:28.802073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.237 [2024-11-20 11:35:28.802086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:43.237 [2024-11-20 11:35:28.806711] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:43.237 [2024-11-20 11:35:28.806754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.237 [2024-11-20 11:35:28.806768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:43.237 [2024-11-20 11:35:28.811710] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:43.237 [2024-11-20 11:35:28.811752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.237 [2024-11-20 11:35:28.811766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:43.237 [2024-11-20 11:35:28.816357] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:43.237 [2024-11-20 11:35:28.816395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.237 [2024-11-20 11:35:28.816408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:43.496 [2024-11-20 11:35:28.821143] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:43.496 [2024-11-20 11:35:28.821192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.496 [2024-11-20 11:35:28.821206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:43.496 [2024-11-20 11:35:28.826490] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:43.496 [2024-11-20 11:35:28.826552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.496 [2024-11-20 11:35:28.826568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:43.496 [2024-11-20 11:35:28.831007] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:43.496 [2024-11-20 11:35:28.831057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.496 [2024-11-20 11:35:28.831072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:43.496 [2024-11-20 11:35:28.835112] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:43.496 [2024-11-20 11:35:28.835154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.496 [2024-11-20 11:35:28.835167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:43.496 [2024-11-20 11:35:28.838992] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:43.496 [2024-11-20 11:35:28.839037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.496 [2024-11-20 11:35:28.839052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:43.496 [2024-11-20 11:35:28.844782] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:43.496 [2024-11-20 11:35:28.844832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.496 [2024-11-20 11:35:28.844846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:43.496 [2024-11-20 11:35:28.850713] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:43.496 [2024-11-20 11:35:28.850754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.496 [2024-11-20 11:35:28.850767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:43.496 [2024-11-20 11:35:28.857234] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:43.496 [2024-11-20 11:35:28.857275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.496 [2024-11-20 11:35:28.857289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:43.496 [2024-11-20 11:35:28.863420] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:43.496 [2024-11-20 11:35:28.863464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.496 [2024-11-20 11:35:28.863478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:43.496 [2024-11-20 11:35:28.866860] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:43.496 [2024-11-20 11:35:28.866901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.496 [2024-11-20 11:35:28.866915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:43.496 [2024-11-20 11:35:28.870832] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:43.496 [2024-11-20 11:35:28.870872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.496 [2024-11-20 11:35:28.870886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:43.496 [2024-11-20 11:35:28.875765] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:43.497 [2024-11-20 11:35:28.875807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.497 [2024-11-20 11:35:28.875821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:43.497 [2024-11-20 11:35:28.880335] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:43.497 [2024-11-20 11:35:28.880400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.497 [2024-11-20 11:35:28.880415] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:43.497 [2024-11-20 11:35:28.886399] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:43.497 [2024-11-20 11:35:28.886463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.497 [2024-11-20 11:35:28.886478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:43.497 [2024-11-20 11:35:28.890937] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:43.497 [2024-11-20 11:35:28.890997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.497 [2024-11-20 11:35:28.891012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:43.497 [2024-11-20 11:35:28.895989] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:43.497 [2024-11-20 11:35:28.896034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.497 [2024-11-20 11:35:28.896048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:43.497 [2024-11-20 11:35:28.902506] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:43.497 [2024-11-20 11:35:28.902548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.497 [2024-11-20 11:35:28.902562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:43.497 [2024-11-20 11:35:28.907820] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:43.497 [2024-11-20 11:35:28.907858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.497 [2024-11-20 11:35:28.907871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:43.497 [2024-11-20 11:35:28.913419] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:43.497 [2024-11-20 11:35:28.913467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.497 [2024-11-20 11:35:28.913480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:43.497 [2024-11-20 11:35:28.918475] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:43.497 [2024-11-20 11:35:28.918526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:43.497 [2024-11-20 11:35:28.918540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:43.497 [2024-11-20 11:35:28.923585] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:43.497 [2024-11-20 11:35:28.923640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.497 [2024-11-20 11:35:28.923653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:43.497 [2024-11-20 11:35:28.928843] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:43.497 [2024-11-20 11:35:28.928884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.497 [2024-11-20 11:35:28.928897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:43.497 [2024-11-20 11:35:28.934157] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:43.497 [2024-11-20 11:35:28.934195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.497 [2024-11-20 11:35:28.934208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:43.497 [2024-11-20 11:35:28.940639] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:43.497 [2024-11-20 11:35:28.940677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.497 [2024-11-20 11:35:28.940690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:43.497 [2024-11-20 11:35:28.945657] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:43.497 [2024-11-20 11:35:28.945688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.497 [2024-11-20 11:35:28.945702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:43.497 [2024-11-20 11:35:28.950039] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:43.497 [2024-11-20 11:35:28.950075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.497 [2024-11-20 11:35:28.950089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:43.497 [2024-11-20 11:35:28.954980] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:43.497 [2024-11-20 11:35:28.955017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 
lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.497 [2024-11-20 11:35:28.955032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:43.497 [2024-11-20 11:35:28.960677] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:43.497 [2024-11-20 11:35:28.960720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.497 [2024-11-20 11:35:28.960735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:43.497 [2024-11-20 11:35:28.964654] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:43.497 [2024-11-20 11:35:28.964695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.497 [2024-11-20 11:35:28.964709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:43.497 [2024-11-20 11:35:28.969131] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:43.497 [2024-11-20 11:35:28.969169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.497 [2024-11-20 11:35:28.969183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:43.497 [2024-11-20 11:35:28.974281] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:43.497 [2024-11-20 11:35:28.974322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.497 [2024-11-20 11:35:28.974336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:43.497 [2024-11-20 11:35:28.979478] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:43.497 [2024-11-20 11:35:28.979518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.497 [2024-11-20 11:35:28.979532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:43.497 [2024-11-20 11:35:28.983131] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:43.497 [2024-11-20 11:35:28.983166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.497 [2024-11-20 11:35:28.983179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:43.497 [2024-11-20 11:35:28.987284] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:43.497 [2024-11-20 11:35:28.987322] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.497 [2024-11-20 11:35:28.987336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:43.497 [2024-11-20 11:35:28.992185] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:43.498 [2024-11-20 11:35:28.992224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.498 [2024-11-20 11:35:28.992239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:43.498 [2024-11-20 11:35:28.995641] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:43.498 [2024-11-20 11:35:28.995678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.498 [2024-11-20 11:35:28.995691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:43.498 [2024-11-20 11:35:28.999720] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:43.498 [2024-11-20 11:35:28.999774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.498 [2024-11-20 11:35:28.999789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:43.498 [2024-11-20 11:35:29.005025] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:43.498 [2024-11-20 11:35:29.005069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.498 [2024-11-20 11:35:29.005083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:43.498 [2024-11-20 11:35:29.009453] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:43.498 [2024-11-20 11:35:29.009491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.498 [2024-11-20 11:35:29.009505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:43.498 [2024-11-20 11:35:29.012270] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:43.498 [2024-11-20 11:35:29.012305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.498 [2024-11-20 11:35:29.012319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:43.498 [2024-11-20 11:35:29.017586] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:43.498 
[2024-11-20 11:35:29.017637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.498 [2024-11-20 11:35:29.017652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:43.498 [2024-11-20 11:35:29.020766] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:43.498 [2024-11-20 11:35:29.020805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.498 [2024-11-20 11:35:29.020818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:43.498 [2024-11-20 11:35:29.025178] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:43.498 [2024-11-20 11:35:29.025216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.498 [2024-11-20 11:35:29.025230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:43.498 [2024-11-20 11:35:29.029949] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:43.498 [2024-11-20 11:35:29.029987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.498 [2024-11-20 11:35:29.030001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:43.498 [2024-11-20 11:35:29.032805] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:43.498 [2024-11-20 11:35:29.032842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.498 [2024-11-20 11:35:29.032855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:43.498 [2024-11-20 11:35:29.038301] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:43.498 [2024-11-20 11:35:29.038351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.498 [2024-11-20 11:35:29.038367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:43.498 [2024-11-20 11:35:29.043169] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:43.498 [2024-11-20 11:35:29.043228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.498 [2024-11-20 11:35:29.043243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:43.498 [2024-11-20 11:35:29.046520] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x1c8d880) 00:22:43.498 [2024-11-20 11:35:29.046572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.498 [2024-11-20 11:35:29.046588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:43.498 [2024-11-20 11:35:29.050801] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:43.498 [2024-11-20 11:35:29.050845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.498 [2024-11-20 11:35:29.050859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:43.498 [2024-11-20 11:35:29.055405] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:43.498 [2024-11-20 11:35:29.055446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.498 [2024-11-20 11:35:29.055460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:43.498 [2024-11-20 11:35:29.059153] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:43.498 [2024-11-20 11:35:29.059191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.498 [2024-11-20 11:35:29.059204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:43.498 [2024-11-20 11:35:29.064021] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:43.498 [2024-11-20 11:35:29.064058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.498 [2024-11-20 11:35:29.064072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:43.498 [2024-11-20 11:35:29.069340] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:43.498 [2024-11-20 11:35:29.069378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.498 [2024-11-20 11:35:29.069393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:43.498 [2024-11-20 11:35:29.073945] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:43.498 [2024-11-20 11:35:29.073983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.498 [2024-11-20 11:35:29.073997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:43.498 [2024-11-20 11:35:29.077165] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:43.498 [2024-11-20 11:35:29.077203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.498 [2024-11-20 11:35:29.077216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:43.764 [2024-11-20 11:35:29.082266] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:43.764 [2024-11-20 11:35:29.082322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.764 [2024-11-20 11:35:29.082337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:43.764 [2024-11-20 11:35:29.087521] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:43.764 [2024-11-20 11:35:29.087579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.764 [2024-11-20 11:35:29.087595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:43.764 [2024-11-20 11:35:29.092848] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:43.764 [2024-11-20 11:35:29.092910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.764 [2024-11-20 11:35:29.092925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:43.764 [2024-11-20 11:35:29.098199] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:43.764 [2024-11-20 11:35:29.098269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.764 [2024-11-20 11:35:29.098288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:43.764 [2024-11-20 11:35:29.102018] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:43.764 [2024-11-20 11:35:29.102065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.764 [2024-11-20 11:35:29.102079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:43.764 [2024-11-20 11:35:29.106569] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:43.764 [2024-11-20 11:35:29.106637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.764 [2024-11-20 11:35:29.106652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 
00:22:43.764 [2024-11-20 11:35:29.111481] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:43.764 [2024-11-20 11:35:29.111520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.764 [2024-11-20 11:35:29.111534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:43.764 [2024-11-20 11:35:29.116401] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:43.764 [2024-11-20 11:35:29.116439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.764 [2024-11-20 11:35:29.116453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:43.765 [2024-11-20 11:35:29.119460] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:43.765 [2024-11-20 11:35:29.119497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.765 [2024-11-20 11:35:29.119512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:43.765 [2024-11-20 11:35:29.124460] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:43.765 [2024-11-20 11:35:29.124498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.765 [2024-11-20 11:35:29.124511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:43.765 [2024-11-20 11:35:29.129706] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:43.765 [2024-11-20 11:35:29.129758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.765 [2024-11-20 11:35:29.129773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:43.765 [2024-11-20 11:35:29.134639] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:43.765 [2024-11-20 11:35:29.134677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.765 [2024-11-20 11:35:29.134692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:43.765 [2024-11-20 11:35:29.137868] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:43.765 [2024-11-20 11:35:29.137906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.765 [2024-11-20 11:35:29.137920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:43.765 [2024-11-20 11:35:29.142746] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:43.765 [2024-11-20 11:35:29.142782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.765 [2024-11-20 11:35:29.142796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:43.765 [2024-11-20 11:35:29.147583] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:43.765 [2024-11-20 11:35:29.147634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.765 [2024-11-20 11:35:29.147649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:43.765 [2024-11-20 11:35:29.152676] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:43.765 [2024-11-20 11:35:29.152710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.765 [2024-11-20 11:35:29.152723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:43.765 [2024-11-20 11:35:29.156402] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:43.765 [2024-11-20 11:35:29.156440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.765 [2024-11-20 11:35:29.156453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:43.765 [2024-11-20 11:35:29.159907] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:43.765 [2024-11-20 11:35:29.159953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.765 [2024-11-20 11:35:29.159967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:43.765 [2024-11-20 11:35:29.164161] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:43.765 [2024-11-20 11:35:29.164200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.765 [2024-11-20 11:35:29.164215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:43.765 [2024-11-20 11:35:29.168417] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:43.765 [2024-11-20 11:35:29.168455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.765 [2024-11-20 11:35:29.168468] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:43.765 [2024-11-20 11:35:29.172015] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:43.765 [2024-11-20 11:35:29.172052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.765 [2024-11-20 11:35:29.172065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:43.765 [2024-11-20 11:35:29.177395] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:43.765 [2024-11-20 11:35:29.177434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.765 [2024-11-20 11:35:29.177448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:43.765 [2024-11-20 11:35:29.182754] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:43.765 [2024-11-20 11:35:29.182791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.765 [2024-11-20 11:35:29.182805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:43.765 [2024-11-20 11:35:29.187833] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:43.765 [2024-11-20 11:35:29.187889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.765 [2024-11-20 11:35:29.187904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:43.765 [2024-11-20 11:35:29.192526] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:43.765 [2024-11-20 11:35:29.192579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.765 [2024-11-20 11:35:29.192594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:43.765 [2024-11-20 11:35:29.195791] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:43.765 [2024-11-20 11:35:29.195836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.765 [2024-11-20 11:35:29.195851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:43.765 [2024-11-20 11:35:29.200335] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:43.765 [2024-11-20 11:35:29.200390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.765 
[2024-11-20 11:35:29.200404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:43.765 [2024-11-20 11:35:29.205478] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:43.765 [2024-11-20 11:35:29.205536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.765 [2024-11-20 11:35:29.205551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:43.765 [2024-11-20 11:35:29.210419] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:43.765 [2024-11-20 11:35:29.210459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.765 [2024-11-20 11:35:29.210485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:43.765 [2024-11-20 11:35:29.213836] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:43.765 [2024-11-20 11:35:29.213874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.765 [2024-11-20 11:35:29.213888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:43.765 [2024-11-20 11:35:29.218507] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:43.765 [2024-11-20 11:35:29.218546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.765 [2024-11-20 11:35:29.218559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:43.765 [2024-11-20 11:35:29.222960] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:43.765 [2024-11-20 11:35:29.222998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.765 [2024-11-20 11:35:29.223011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:43.765 [2024-11-20 11:35:29.228204] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:43.765 [2024-11-20 11:35:29.228244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.765 [2024-11-20 11:35:29.228266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:43.765 [2024-11-20 11:35:29.231939] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:43.765 [2024-11-20 11:35:29.231975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9152 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.765 [2024-11-20 11:35:29.231988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:43.765 [2024-11-20 11:35:29.236596] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:43.765 [2024-11-20 11:35:29.236646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.765 [2024-11-20 11:35:29.236661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:43.765 [2024-11-20 11:35:29.242002] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:43.765 [2024-11-20 11:35:29.242041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.765 [2024-11-20 11:35:29.242055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:43.765 [2024-11-20 11:35:29.247226] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:43.765 [2024-11-20 11:35:29.247289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.765 [2024-11-20 11:35:29.247313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:43.765 [2024-11-20 11:35:29.252049] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:43.765 [2024-11-20 11:35:29.252090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.765 [2024-11-20 11:35:29.252103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:43.765 [2024-11-20 11:35:29.255315] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:43.765 [2024-11-20 11:35:29.255361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.765 [2024-11-20 11:35:29.255375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:43.765 [2024-11-20 11:35:29.260947] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:43.765 [2024-11-20 11:35:29.261015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.765 [2024-11-20 11:35:29.261031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:43.765 [2024-11-20 11:35:29.266098] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:43.765 [2024-11-20 11:35:29.266162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:2 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.765 [2024-11-20 11:35:29.266176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:43.765 [2024-11-20 11:35:29.271578] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:43.765 [2024-11-20 11:35:29.271643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.765 [2024-11-20 11:35:29.271659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:43.765 [2024-11-20 11:35:29.274840] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:43.765 [2024-11-20 11:35:29.274876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.765 [2024-11-20 11:35:29.274890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:43.765 [2024-11-20 11:35:29.279513] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:43.765 [2024-11-20 11:35:29.279552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.765 [2024-11-20 11:35:29.279567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:43.765 [2024-11-20 11:35:29.283972] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:43.765 [2024-11-20 11:35:29.284011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.765 [2024-11-20 11:35:29.284025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:43.766 [2024-11-20 11:35:29.287314] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:43.766 [2024-11-20 11:35:29.287352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.766 [2024-11-20 11:35:29.287366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:43.766 [2024-11-20 11:35:29.292445] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:43.766 [2024-11-20 11:35:29.292488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.766 [2024-11-20 11:35:29.292503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:43.766 [2024-11-20 11:35:29.297985] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:43.766 [2024-11-20 11:35:29.298029] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.766 [2024-11-20 11:35:29.298044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:43.766 [2024-11-20 11:35:29.303465] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:43.766 [2024-11-20 11:35:29.303503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.766 [2024-11-20 11:35:29.303517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:43.766 [2024-11-20 11:35:29.307805] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:43.766 [2024-11-20 11:35:29.307843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.766 [2024-11-20 11:35:29.307858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:43.766 [2024-11-20 11:35:29.312030] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:43.766 [2024-11-20 11:35:29.312068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.766 [2024-11-20 11:35:29.312082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:43.766 [2024-11-20 11:35:29.316604] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:43.766 [2024-11-20 11:35:29.316655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.766 [2024-11-20 11:35:29.316669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:43.766 [2024-11-20 11:35:29.321168] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:43.766 [2024-11-20 11:35:29.321205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.766 [2024-11-20 11:35:29.321219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:43.766 [2024-11-20 11:35:29.326699] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:43.766 [2024-11-20 11:35:29.326732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.766 [2024-11-20 11:35:29.326746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:43.766 [2024-11-20 11:35:29.331192] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1c8d880) 00:22:43.766 [2024-11-20 11:35:29.331228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.766 [2024-11-20 11:35:29.331241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:43.766 [2024-11-20 11:35:29.335923] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:43.766 [2024-11-20 11:35:29.335961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.766 [2024-11-20 11:35:29.335975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:43.766 [2024-11-20 11:35:29.340348] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:43.766 [2024-11-20 11:35:29.340387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.766 [2024-11-20 11:35:29.340401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:43.766 [2024-11-20 11:35:29.346086] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:43.766 [2024-11-20 11:35:29.346122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.766 [2024-11-20 11:35:29.346136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:44.024 [2024-11-20 11:35:29.350365] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:44.024 [2024-11-20 11:35:29.350402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.024 [2024-11-20 11:35:29.350416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:44.024 [2024-11-20 11:35:29.354599] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:44.024 [2024-11-20 11:35:29.354646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.024 [2024-11-20 11:35:29.354660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:44.024 6488.00 IOPS, 811.00 MiB/s [2024-11-20T11:35:29.613Z] [2024-11-20 11:35:29.360225] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c8d880) 00:22:44.024 [2024-11-20 11:35:29.360263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.024 [2024-11-20 11:35:29.360277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:44.024 00:22:44.024 
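Every READ above completes with COMMAND TRANSIENT TRANSPORT ERROR (00/22) because the NVMe/TCP receive path hit a data digest (CRC-32C) mismatch; the summary table and the get_transient_errcount trace further down show how the test turns that flood into a single pass/fail check. A minimal sketch of that check, reusing the rpc.py path, bperf socket, bdev name, and jq filter that appear later in this log (the surrounding shell is illustrative, not the test script itself):

    # Count completions that failed with TRANSIENT TRANSPORT ERROR (00/22).
    # The per-status-code counters are available because bdev_nvme_set_options
    # is run with --nvme-error-stat before the controller is attached (the same
    # call is visible in the next phase's trace below).
    err_count=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
            bdev_get_iostat -b nvme0n1 \
        | jq -r '.bdevs[0]
            | .driver_specific
            | .nvme_error
            | .status_code
            | .command_transient_transport_error')
    # The test only requires the count to be non-zero (it is 420 in this run).
    (( err_count > 0 ))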
Latency(us) 00:22:44.024 [2024-11-20T11:35:29.613Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:44.024 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:22:44.024 nvme0n1 : 2.00 6484.92 810.62 0.00 0.00 2462.36 651.64 13702.98 00:22:44.024 [2024-11-20T11:35:29.613Z] =================================================================================================================== 00:22:44.024 [2024-11-20T11:35:29.613Z] Total : 6484.92 810.62 0.00 0.00 2462.36 651.64 13702.98 00:22:44.024 { 00:22:44.024 "results": [ 00:22:44.024 { 00:22:44.024 "job": "nvme0n1", 00:22:44.024 "core_mask": "0x2", 00:22:44.024 "workload": "randread", 00:22:44.024 "status": "finished", 00:22:44.024 "queue_depth": 16, 00:22:44.024 "io_size": 131072, 00:22:44.024 "runtime": 2.003417, 00:22:44.024 "iops": 6484.920513303022, 00:22:44.024 "mibps": 810.6150641628777, 00:22:44.024 "io_failed": 0, 00:22:44.024 "io_timeout": 0, 00:22:44.025 "avg_latency_us": 2462.358441558442, 00:22:44.025 "min_latency_us": 651.6363636363636, 00:22:44.025 "max_latency_us": 13702.981818181817 00:22:44.025 } 00:22:44.025 ], 00:22:44.025 "core_count": 1 00:22:44.025 } 00:22:44.025 11:35:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:22:44.025 11:35:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:22:44.025 11:35:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:22:44.025 11:35:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:22:44.025 | .driver_specific 00:22:44.025 | .nvme_error 00:22:44.025 | .status_code 00:22:44.025 | .command_transient_transport_error' 00:22:44.283 11:35:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 420 > 0 )) 00:22:44.283 11:35:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 94186 00:22:44.284 11:35:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 94186 ']' 00:22:44.284 11:35:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 94186 00:22:44.284 11:35:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:22:44.284 11:35:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:44.284 11:35:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 94186 00:22:44.284 11:35:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:44.284 11:35:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:44.284 killing process with pid 94186 00:22:44.284 11:35:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 94186' 00:22:44.284 11:35:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 94186 00:22:44.284 Received shutdown signal, test time was about 2.000000 seconds 00:22:44.284 00:22:44.284 Latency(us) 00:22:44.284 [2024-11-20T11:35:29.873Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s 
Average min max 00:22:44.284 [2024-11-20T11:35:29.873Z] =================================================================================================================== 00:22:44.284 [2024-11-20T11:35:29.873Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:44.284 11:35:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 94186 00:22:44.284 11:35:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:22:44.284 11:35:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:22:44.284 11:35:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:22:44.284 11:35:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:22:44.284 11:35:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:22:44.284 11:35:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=94260 00:22:44.284 11:35:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 94260 /var/tmp/bperf.sock 00:22:44.284 11:35:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:22:44.284 11:35:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 94260 ']' 00:22:44.284 11:35:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:22:44.284 11:35:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:44.284 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:22:44.284 11:35:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:22:44.284 11:35:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:44.284 11:35:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:22:44.543 [2024-11-20 11:35:29.885077] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 
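At this point the first bperf instance has been torn down and run_bperf_err relaunches bdevperf for the randwrite / 4096-byte / qd 128 case. A sketch of that relaunch pattern, with the binary path, core mask, socket, and workload arguments taken from the trace above (the polling loop is an illustrative stand-in for the waitforlisten helper, and rpc_get_methods is only used as a cheap liveness probe):

    BPERF_SOCK=/var/tmp/bperf.sock
    # -z keeps bdevperf idle after startup so the test can first configure the
    # target over RPC and only then start I/O with perform_tests.
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -m 2 -r "$BPERF_SOCK" -w randwrite -o 4096 -t 2 -q 128 -z &
    bperfpid=$!
    # Wait until the UNIX domain socket answers RPCs.
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$BPERF_SOCK" \
            rpc_get_methods >/dev/null 2>&1; do
        sleep 0.2
    done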
00:22:44.543 [2024-11-20 11:35:29.885204] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94260 ] 00:22:44.543 [2024-11-20 11:35:30.041581] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:44.543 [2024-11-20 11:35:30.089319] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:44.801 11:35:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:44.801 11:35:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:22:44.801 11:35:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:22:44.801 11:35:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:22:45.059 11:35:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:22:45.059 11:35:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:45.059 11:35:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:22:45.059 11:35:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:45.059 11:35:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:45.059 11:35:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:45.317 nvme0n1 00:22:45.317 11:35:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:22:45.317 11:35:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:45.317 11:35:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:22:45.317 11:35:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:45.317 11:35:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:22:45.317 11:35:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:22:45.576 Running I/O for 2 seconds... 
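The trace above is the whole error-injection setup for the write-path case: NVMe error statistics are enabled, any previous crc32c injection is cleared, the controller is attached with the TCP data digest (--ddgst) turned on, crc32c corruption is injected, and perform_tests starts the I/O whose digest failures fill the log that follows. A condensed sketch of the same sequence as plain rpc.py calls (all arguments are copied from the trace above; the RPC variable is only shorthand):

    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock"
    $RPC bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    $RPC accel_error_inject_error -o crc32c -t disable    # clear any earlier injection
    $RPC bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    $RPC accel_error_inject_error -o crc32c -t corrupt -i 256   # corrupt crc32c results (flags as in the run above)
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
        -s /var/tmp/bperf.sock perform_tests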
00:22:45.576 [2024-11-20 11:35:31.037100] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd2e40) with pdu=0x2000166f57b0 00:22:45.576 [2024-11-20 11:35:31.038274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:8349 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.576 [2024-11-20 11:35:31.038323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:22:45.576 [2024-11-20 11:35:31.051894] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd2e40) with pdu=0x2000166e4140 00:22:45.576 [2024-11-20 11:35:31.053703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:11918 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.576 [2024-11-20 11:35:31.053760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:22:45.576 [2024-11-20 11:35:31.060672] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd2e40) with pdu=0x2000166e0ea0 00:22:45.576 [2024-11-20 11:35:31.061535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:19579 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.576 [2024-11-20 11:35:31.061580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:22:45.576 [2024-11-20 11:35:31.075883] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd2e40) with pdu=0x2000166f2510 00:22:45.576 [2024-11-20 11:35:31.077406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:5552 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.576 [2024-11-20 11:35:31.077450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:22:45.576 [2024-11-20 11:35:31.087333] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd2e40) with pdu=0x2000166fbcf0 00:22:45.576 [2024-11-20 11:35:31.088598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:22507 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.576 [2024-11-20 11:35:31.088654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:22:45.576 [2024-11-20 11:35:31.100145] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd2e40) with pdu=0x2000166f35f0 00:22:45.577 [2024-11-20 11:35:31.101407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:10238 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.577 [2024-11-20 11:35:31.101453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:22:45.577 [2024-11-20 11:35:31.115203] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd2e40) with pdu=0x2000166e1f80 00:22:45.577 [2024-11-20 11:35:31.117097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:4829 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.577 [2024-11-20 11:35:31.117140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 
cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:22:45.577 [2024-11-20 11:35:31.123914] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd2e40) with pdu=0x2000166e3060 00:22:45.577 [2024-11-20 11:35:31.124836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:21880 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.577 [2024-11-20 11:35:31.124876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:22:45.577 [2024-11-20 11:35:31.138675] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd2e40) with pdu=0x2000166f46d0 00:22:45.577 [2024-11-20 11:35:31.140256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:14527 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.577 [2024-11-20 11:35:31.140299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:22:45.577 [2024-11-20 11:35:31.150065] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd2e40) with pdu=0x2000166f9b30 00:22:45.577 [2024-11-20 11:35:31.151432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:10804 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.577 [2024-11-20 11:35:31.151476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:22:45.577 [2024-11-20 11:35:31.162078] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd2e40) with pdu=0x2000166f1430 00:22:45.835 [2024-11-20 11:35:31.163378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:19562 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.835 [2024-11-20 11:35:31.163420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:22:45.835 [2024-11-20 11:35:31.176894] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd2e40) with pdu=0x2000166dfdc0 00:22:45.835 [2024-11-20 11:35:31.178878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:459 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.835 [2024-11-20 11:35:31.178920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:22:45.835 [2024-11-20 11:35:31.185677] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd2e40) with pdu=0x2000166e5220 00:22:45.835 [2024-11-20 11:35:31.186683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:22290 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.835 [2024-11-20 11:35:31.186722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:22:45.835 [2024-11-20 11:35:31.200484] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd2e40) with pdu=0x2000166f6890 00:22:45.835 [2024-11-20 11:35:31.202317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:24008 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.835 [2024-11-20 11:35:31.202367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:22:45.835 [2024-11-20 11:35:31.213142] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd2e40) with pdu=0x2000166e49b0 00:22:45.835 [2024-11-20 11:35:31.214791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:23092 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.835 [2024-11-20 11:35:31.214846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:22:45.835 [2024-11-20 11:35:31.225791] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd2e40) with pdu=0x2000166f7da8 00:22:45.835 [2024-11-20 11:35:31.226886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:8983 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.835 [2024-11-20 11:35:31.226929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:22:45.835 [2024-11-20 11:35:31.237453] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd2e40) with pdu=0x2000166ff3c8 00:22:45.835 [2024-11-20 11:35:31.238388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:12427 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.835 [2024-11-20 11:35:31.238448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:22:45.835 [2024-11-20 11:35:31.249713] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd2e40) with pdu=0x2000166ee5c8 00:22:45.835 [2024-11-20 11:35:31.251077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:19065 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.835 [2024-11-20 11:35:31.251121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:22:45.835 [2024-11-20 11:35:31.264556] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd2e40) with pdu=0x2000166e99d8 00:22:45.835 [2024-11-20 11:35:31.266636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:20985 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.835 [2024-11-20 11:35:31.266681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:22:45.835 [2024-11-20 11:35:31.273600] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd2e40) with pdu=0x2000166e23b8 00:22:45.835 [2024-11-20 11:35:31.274721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:5108 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.836 [2024-11-20 11:35:31.274780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:22:45.836 [2024-11-20 11:35:31.289445] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd2e40) with pdu=0x2000166e49b0 00:22:45.836 [2024-11-20 11:35:31.291203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:18048 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.836 [2024-11-20 11:35:31.291248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:45.836 [2024-11-20 11:35:31.298227] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd2e40) with pdu=0x2000166feb58 00:22:45.836 [2024-11-20 11:35:31.299003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13631 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.836 [2024-11-20 11:35:31.299042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:45.836 [2024-11-20 11:35:31.313214] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd2e40) with pdu=0x2000166f0788 00:22:45.836 [2024-11-20 11:35:31.314742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:10283 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.836 [2024-11-20 11:35:31.314791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:22:45.836 [2024-11-20 11:35:31.324986] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd2e40) with pdu=0x2000166ebb98 00:22:45.836 [2024-11-20 11:35:31.326214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:7985 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.836 [2024-11-20 11:35:31.326261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:22:45.836 [2024-11-20 11:35:31.336946] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd2e40) with pdu=0x2000166f7538 00:22:45.836 [2024-11-20 11:35:31.338098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:17403 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.836 [2024-11-20 11:35:31.338138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:22:45.836 [2024-11-20 11:35:31.351690] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd2e40) with pdu=0x2000166fa3a0 00:22:45.836 [2024-11-20 11:35:31.353486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:21049 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.836 [2024-11-20 11:35:31.353528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:22:45.836 [2024-11-20 11:35:31.360362] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd2e40) with pdu=0x2000166fcdd0 00:22:45.836 [2024-11-20 11:35:31.361207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:485 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.836 [2024-11-20 11:35:31.361244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:22:45.836 [2024-11-20 11:35:31.375467] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd2e40) with pdu=0x2000166f2948 00:22:45.836 [2024-11-20 11:35:31.377132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:18888 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.836 [2024-11-20 11:35:31.377181] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:22:45.836 [2024-11-20 11:35:31.387510] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd2e40) with pdu=0x2000166e7818 00:22:45.836 [2024-11-20 11:35:31.388787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:13235 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.836 [2024-11-20 11:35:31.388826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:22:45.836 [2024-11-20 11:35:31.399471] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd2e40) with pdu=0x2000166f96f8 00:22:45.836 [2024-11-20 11:35:31.400710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:20158 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.836 [2024-11-20 11:35:31.400747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:22:45.836 [2024-11-20 11:35:31.414223] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd2e40) with pdu=0x2000166fc560 00:22:45.836 [2024-11-20 11:35:31.416117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:7931 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.836 [2024-11-20 11:35:31.416159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:22:46.095 [2024-11-20 11:35:31.422819] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd2e40) with pdu=0x2000166e6b70 00:22:46.095 [2024-11-20 11:35:31.423567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:228 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:46.095 [2024-11-20 11:35:31.423605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:22:46.095 [2024-11-20 11:35:31.436964] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd2e40) with pdu=0x2000166e1b48 00:22:46.095 [2024-11-20 11:35:31.438103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:9741 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:46.095 [2024-11-20 11:35:31.438162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:22:46.095 [2024-11-20 11:35:31.449352] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd2e40) with pdu=0x2000166fda78 00:22:46.095 [2024-11-20 11:35:31.450494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:849 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:46.095 [2024-11-20 11:35:31.450554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:22:46.095 [2024-11-20 11:35:31.461448] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd2e40) with pdu=0x2000166df118 00:22:46.095 [2024-11-20 11:35:31.462809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:19432 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:46.095 [2024-11-20 
11:35:31.462878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:22:46.095 [2024-11-20 11:35:31.472438] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd2e40) with pdu=0x2000166ebfd0 00:22:46.095 [2024-11-20 11:35:31.473243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:25569 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:46.095 [2024-11-20 11:35:31.473304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:22:46.095 [2024-11-20 11:35:31.488177] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd2e40) with pdu=0x2000166f9f68 00:22:46.095 [2024-11-20 11:35:31.489338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:8996 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:46.095 [2024-11-20 11:35:31.489407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:22:46.095 [2024-11-20 11:35:31.498907] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd2e40) with pdu=0x2000166ed0b0 00:22:46.095 [2024-11-20 11:35:31.499772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:6563 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:46.095 [2024-11-20 11:35:31.499831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:46.095 [2024-11-20 11:35:31.510871] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd2e40) with pdu=0x2000166eee38 00:22:46.095 [2024-11-20 11:35:31.511693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:25568 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:46.095 [2024-11-20 11:35:31.511755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:22:46.095 [2024-11-20 11:35:31.522468] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd2e40) with pdu=0x2000166ea248 00:22:46.095 [2024-11-20 11:35:31.523287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:12874 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:46.095 [2024-11-20 11:35:31.523351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:22:46.095 [2024-11-20 11:35:31.537185] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd2e40) with pdu=0x2000166f5378 00:22:46.095 [2024-11-20 11:35:31.538586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:6333 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:46.095 [2024-11-20 11:35:31.538669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:22:46.095 [2024-11-20 11:35:31.551488] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd2e40) with pdu=0x2000166fc998 00:22:46.095 [2024-11-20 11:35:31.553157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:6810 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:22:46.095 [2024-11-20 11:35:31.553222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:22:46.095 [2024-11-20 11:35:31.563314] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd2e40) with pdu=0x2000166f31b8 00:22:46.095 [2024-11-20 11:35:31.564390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:10944 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:46.095 [2024-11-20 11:35:31.564449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:22:46.095 [2024-11-20 11:35:31.576074] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd2e40) with pdu=0x2000166e5658 00:22:46.095 [2024-11-20 11:35:31.577227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:9691 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:46.095 [2024-11-20 11:35:31.577293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:22:46.095 [2024-11-20 11:35:31.588126] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd2e40) with pdu=0x2000166e4de8 00:22:46.095 [2024-11-20 11:35:31.589294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:11344 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:46.095 [2024-11-20 11:35:31.589360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:22:46.095 [2024-11-20 11:35:31.601771] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd2e40) with pdu=0x2000166e1710 00:22:46.095 [2024-11-20 11:35:31.603402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:12360 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:46.095 [2024-11-20 11:35:31.603446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:22:46.095 [2024-11-20 11:35:31.613440] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd2e40) with pdu=0x2000166ed920 00:22:46.095 [2024-11-20 11:35:31.614926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:10009 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:46.095 [2024-11-20 11:35:31.614972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:22:46.096 [2024-11-20 11:35:31.625725] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd2e40) with pdu=0x2000166fc560 00:22:46.096 [2024-11-20 11:35:31.627099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:7295 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:46.096 [2024-11-20 11:35:31.627144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:22:46.096 [2024-11-20 11:35:31.640559] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd2e40) with pdu=0x2000166e95a0 00:22:46.096 [2024-11-20 11:35:31.642575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:7429 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:22:46.096 [2024-11-20 11:35:31.642627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:22:46.096 [2024-11-20 11:35:31.649338] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd2e40) with pdu=0x2000166e27f0 00:22:46.096 [2024-11-20 11:35:31.650418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:12273 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:46.096 [2024-11-20 11:35:31.650458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:22:46.096 [2024-11-20 11:35:31.664183] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd2e40) with pdu=0x2000166df988 00:22:46.096 [2024-11-20 11:35:31.665724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:19497 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:46.096 [2024-11-20 11:35:31.665783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:22:46.096 [2024-11-20 11:35:31.675791] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd2e40) with pdu=0x2000166e0a68 00:22:46.096 [2024-11-20 11:35:31.677186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:595 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:46.096 [2024-11-20 11:35:31.677229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:22:46.354 [2024-11-20 11:35:31.687574] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd2e40) with pdu=0x2000166de038 00:22:46.354 [2024-11-20 11:35:31.688862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:18868 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:46.354 [2024-11-20 11:35:31.688902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:22:46.354 [2024-11-20 11:35:31.699236] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd2e40) with pdu=0x2000166ea680 00:22:46.354 [2024-11-20 11:35:31.700332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:23562 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:46.354 [2024-11-20 11:35:31.700375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:22:46.354 [2024-11-20 11:35:31.711951] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd2e40) with pdu=0x2000166f57b0 00:22:46.354 [2024-11-20 11:35:31.713332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:22496 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:46.354 [2024-11-20 11:35:31.713378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:22:46.354 [2024-11-20 11:35:31.723324] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd2e40) with pdu=0x2000166e8d30 00:22:46.354 [2024-11-20 11:35:31.724444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 
lba:10144 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:46.354 [2024-11-20 11:35:31.724486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:22:46.354 [2024-11-20 11:35:31.735239] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd2e40) with pdu=0x2000166fa3a0 00:22:46.354 [2024-11-20 11:35:31.736341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:4834 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:46.354 [2024-11-20 11:35:31.736383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:46.354 [2024-11-20 11:35:31.750039] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd2e40) with pdu=0x2000166e8088 00:22:46.355 [2024-11-20 11:35:31.751820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:564 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:46.355 [2024-11-20 11:35:31.751862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:22:46.355 [2024-11-20 11:35:31.759162] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd2e40) with pdu=0x2000166ddc00 00:22:46.355 [2024-11-20 11:35:31.760026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:18653 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:46.355 [2024-11-20 11:35:31.760093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:22:46.355 [2024-11-20 11:35:31.776205] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd2e40) with pdu=0x2000166de470 00:22:46.355 [2024-11-20 11:35:31.777592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:7908 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:46.355 [2024-11-20 11:35:31.777674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:22:46.355 [2024-11-20 11:35:31.788604] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd2e40) with pdu=0x2000166f7da8 00:22:46.355 [2024-11-20 11:35:31.789976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:24778 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:46.355 [2024-11-20 11:35:31.790045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:22:46.355 [2024-11-20 11:35:31.801457] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd2e40) with pdu=0x2000166f2d80 00:22:46.355 [2024-11-20 11:35:31.803134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:22853 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:46.355 [2024-11-20 11:35:31.803200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:22:46.355 [2024-11-20 11:35:31.812372] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd2e40) with pdu=0x2000166e1f80 00:22:46.355 [2024-11-20 11:35:31.813113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:13 nsid:1 lba:11376 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:46.355 [2024-11-20 11:35:31.813179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:22:46.355 [2024-11-20 11:35:31.825298] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd2e40) with pdu=0x2000166f92c0 00:22:46.355 [2024-11-20 11:35:31.826285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:7881 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:46.355 [2024-11-20 11:35:31.826351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:22:46.355 [2024-11-20 11:35:31.837331] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd2e40) with pdu=0x2000166f92c0 00:22:46.355 [2024-11-20 11:35:31.838314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:1361 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:46.355 [2024-11-20 11:35:31.838373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:22:46.355 [2024-11-20 11:35:31.851000] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd2e40) with pdu=0x2000166f92c0 00:22:46.355 [2024-11-20 11:35:31.852640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:13118 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:46.355 [2024-11-20 11:35:31.852709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:22:46.355 [2024-11-20 11:35:31.862283] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd2e40) with pdu=0x2000166ebfd0 00:22:46.355 [2024-11-20 11:35:31.863216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:18585 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:46.355 [2024-11-20 11:35:31.863286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:22:46.355 [2024-11-20 11:35:31.874399] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd2e40) with pdu=0x2000166e9168 00:22:46.355 [2024-11-20 11:35:31.875225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:15403 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:46.355 [2024-11-20 11:35:31.875290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:22:46.355 [2024-11-20 11:35:31.887945] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd2e40) with pdu=0x2000166e88f8 00:22:46.355 [2024-11-20 11:35:31.889218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:7695 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:46.355 [2024-11-20 11:35:31.889284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:46.355 [2024-11-20 11:35:31.902514] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd2e40) with pdu=0x2000166eee38 00:22:46.355 [2024-11-20 11:35:31.903956] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:15187 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:46.355 [2024-11-20 11:35:31.904022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:22:46.355 [2024-11-20 11:35:31.916020] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd2e40) with pdu=0x2000166fa7d8 00:22:46.355 [2024-11-20 11:35:31.917482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:5640 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:46.355 [2024-11-20 11:35:31.917547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:46.355 [2024-11-20 11:35:31.929882] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd2e40) with pdu=0x2000166e8088 00:22:46.355 [2024-11-20 11:35:31.931357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:2070 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:46.355 [2024-11-20 11:35:31.931418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:22:46.614 [2024-11-20 11:35:31.941540] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd2e40) with pdu=0x2000166ed4e8 00:22:46.614 [2024-11-20 11:35:31.942923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:7758 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:46.614 [2024-11-20 11:35:31.942967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:22:46.614 [2024-11-20 11:35:31.953896] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd2e40) with pdu=0x2000166f4298 00:22:46.614 [2024-11-20 11:35:31.955316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:14314 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:46.614 [2024-11-20 11:35:31.955357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:22:46.614 [2024-11-20 11:35:31.969258] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd2e40) with pdu=0x2000166e3d08 00:22:46.614 [2024-11-20 11:35:31.971307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:18970 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:46.614 [2024-11-20 11:35:31.971347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:22:46.614 [2024-11-20 11:35:31.978248] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd2e40) with pdu=0x2000166eb760 00:22:46.614 [2024-11-20 11:35:31.979240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:19909 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:46.614 [2024-11-20 11:35:31.979277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:22:46.614 [2024-11-20 11:35:31.991034] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd2e40) with pdu=0x2000166fc560 00:22:46.614 [2024-11-20 
11:35:31.992105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:4353 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:46.614 [2024-11-20 11:35:31.992143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:22:46.614 [2024-11-20 11:35:32.003101] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd2e40) with pdu=0x2000166f4298 00:22:46.614 [2024-11-20 11:35:32.004004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:4244 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:46.614 [2024-11-20 11:35:32.004042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:22:46.615 [2024-11-20 11:35:32.017892] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd2e40) with pdu=0x2000166e9168 00:22:46.615 [2024-11-20 11:35:32.018953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:3296 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:46.615 [2024-11-20 11:35:32.018995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:22:46.615 20169.00 IOPS, 78.79 MiB/s [2024-11-20T11:35:32.204Z] [2024-11-20 11:35:32.029585] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd2e40) with pdu=0x2000166fdeb0 00:22:46.615 [2024-11-20 11:35:32.030416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:10403 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:46.615 [2024-11-20 11:35:32.030455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:22:46.615 [2024-11-20 11:35:32.041425] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd2e40) with pdu=0x2000166f3e60 00:22:46.615 [2024-11-20 11:35:32.042188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:17106 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:46.615 [2024-11-20 11:35:32.042228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:22:46.615 [2024-11-20 11:35:32.055971] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd2e40) with pdu=0x2000166f1430 00:22:46.615 [2024-11-20 11:35:32.057559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:7189 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:46.615 [2024-11-20 11:35:32.057599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:22:46.615 [2024-11-20 11:35:32.067696] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd2e40) with pdu=0x2000166f1430 00:22:46.615 [2024-11-20 11:35:32.069142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:6938 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:46.615 [2024-11-20 11:35:32.069178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:22:46.615 [2024-11-20 11:35:32.079600] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x1bd2e40) with pdu=0x2000166e1710 00:22:46.615 [2024-11-20 11:35:32.080846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10827 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:46.615 [2024-11-20 11:35:32.080886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:22:46.615 [2024-11-20 11:35:32.091967] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd2e40) with pdu=0x2000166ea680 00:22:46.615 [2024-11-20 11:35:32.093195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:16409 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:46.615 [2024-11-20 11:35:32.093230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:22:46.615 [2024-11-20 11:35:32.106916] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd2e40) with pdu=0x2000166fb048 00:22:46.615 [2024-11-20 11:35:32.108853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:16992 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:46.615 [2024-11-20 11:35:32.108887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:22:46.615 [2024-11-20 11:35:32.115802] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd2e40) with pdu=0x2000166fe720 00:22:46.615 [2024-11-20 11:35:32.116539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:16439 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:46.615 [2024-11-20 11:35:32.116574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:22:46.615 [2024-11-20 11:35:32.131866] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd2e40) with pdu=0x2000166e2c28 00:22:46.615 [2024-11-20 11:35:32.133697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:23691 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:46.615 [2024-11-20 11:35:32.133752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:22:46.615 [2024-11-20 11:35:32.143804] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd2e40) with pdu=0x2000166e0ea0 00:22:46.615 [2024-11-20 11:35:32.145357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:8924 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:46.615 [2024-11-20 11:35:32.145400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:46.615 [2024-11-20 11:35:32.153762] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd2e40) with pdu=0x2000166f4f40 00:22:46.615 [2024-11-20 11:35:32.154385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:12954 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:46.615 [2024-11-20 11:35:32.154420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:22:46.615 [2024-11-20 11:35:32.166071] tcp.c:2233:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x1bd2e40) with pdu=0x2000166f6890 00:22:46.615 [2024-11-20 11:35:32.167160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:15448 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:46.615 [2024-11-20 11:35:32.167195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:22:46.615 [2024-11-20 11:35:32.180644] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd2e40) with pdu=0x2000166f20d8 00:22:46.615 [2024-11-20 11:35:32.182437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:10076 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:46.615 [2024-11-20 11:35:32.182479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:22:46.615 [2024-11-20 11:35:32.189545] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd2e40) with pdu=0x2000166fdeb0 00:22:46.615 [2024-11-20 11:35:32.190420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:4309 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:46.615 [2024-11-20 11:35:32.190462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:22:46.875 [2024-11-20 11:35:32.204423] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd2e40) with pdu=0x2000166ebb98 00:22:46.875 [2024-11-20 11:35:32.205937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:10517 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:46.875 [2024-11-20 11:35:32.205975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:22:46.875 [2024-11-20 11:35:32.215824] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd2e40) with pdu=0x2000166e7c50 00:22:46.875 [2024-11-20 11:35:32.217058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:14922 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:46.875 [2024-11-20 11:35:32.217097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:22:46.875 [2024-11-20 11:35:32.227755] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd2e40) with pdu=0x2000166fe720 00:22:46.875 [2024-11-20 11:35:32.228932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:9962 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:46.875 [2024-11-20 11:35:32.228969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:22:46.875 [2024-11-20 11:35:32.242384] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd2e40) with pdu=0x2000166ed4e8 00:22:46.875 [2024-11-20 11:35:32.244225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:16744 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:46.875 [2024-11-20 11:35:32.244261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:22:46.875 [2024-11-20 11:35:32.251046] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd2e40) with pdu=0x2000166e5ec8 00:22:46.875 [2024-11-20 11:35:32.251922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:2389 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:46.875 [2024-11-20 11:35:32.251957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:22:46.875 [2024-11-20 11:35:32.265694] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd2e40) with pdu=0x2000166fef90 00:22:46.875 [2024-11-20 11:35:32.267241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:16781 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:46.875 [2024-11-20 11:35:32.267279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:22:46.875 [2024-11-20 11:35:32.276980] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd2e40) with pdu=0x2000166fa3a0 00:22:46.875 [2024-11-20 11:35:32.278270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:24978 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:46.875 [2024-11-20 11:35:32.278307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:22:46.875 [2024-11-20 11:35:32.288834] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd2e40) with pdu=0x2000166ef6a8 00:22:46.875 [2024-11-20 11:35:32.290132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:20399 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:46.875 [2024-11-20 11:35:32.290173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:22:46.875 [2024-11-20 11:35:32.303856] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd2e40) with pdu=0x2000166fd640 00:22:46.875 [2024-11-20 11:35:32.305800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:1039 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:46.875 [2024-11-20 11:35:32.305840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:22:46.875 [2024-11-20 11:35:32.312503] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd2e40) with pdu=0x2000166f2948 00:22:46.875 [2024-11-20 11:35:32.313453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:5201 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:46.875 [2024-11-20 11:35:32.313486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:22:46.875 [2024-11-20 11:35:32.327351] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd2e40) with pdu=0x2000166df550 00:22:46.875 [2024-11-20 11:35:32.329044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:2526 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:46.875 [2024-11-20 11:35:32.329091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:22:46.875 
[2024-11-20 11:35:32.339026] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd2e40) with pdu=0x2000166e73e0 00:22:46.875 [2024-11-20 11:35:32.340413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:10353 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:46.875 [2024-11-20 11:35:32.340454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:22:46.875 [2024-11-20 11:35:32.350999] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd2e40) with pdu=0x2000166f6cc8 00:22:46.875 [2024-11-20 11:35:32.352329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:24869 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:46.875 [2024-11-20 11:35:32.352365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:22:46.875 [2024-11-20 11:35:32.365659] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd2e40) with pdu=0x2000166e6738 00:22:46.875 [2024-11-20 11:35:32.367679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:2422 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:46.875 [2024-11-20 11:35:32.367717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:22:46.875 [2024-11-20 11:35:32.374775] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd2e40) with pdu=0x2000166dece0 00:22:46.875 [2024-11-20 11:35:32.375666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:3210 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:46.875 [2024-11-20 11:35:32.375701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:22:46.875 [2024-11-20 11:35:32.390553] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd2e40) with pdu=0x2000166ddc00 00:22:46.875 [2024-11-20 11:35:32.392429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:3109 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:46.875 [2024-11-20 11:35:32.392464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:22:46.875 [2024-11-20 11:35:32.402180] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd2e40) with pdu=0x2000166e8d30 00:22:46.875 [2024-11-20 11:35:32.403868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:6569 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:46.875 [2024-11-20 11:35:32.403901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:22:46.876 [2024-11-20 11:35:32.413728] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd2e40) with pdu=0x2000166eee38 00:22:46.876 [2024-11-20 11:35:32.415270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:6795 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:46.876 [2024-11-20 11:35:32.415303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:005d p:0 m:0 
dnr:0 00:22:46.876 [2024-11-20 11:35:32.425241] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd2e40) with pdu=0x2000166e6738 00:22:46.876 [2024-11-20 11:35:32.426682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:13953 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:46.876 [2024-11-20 11:35:32.426720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:22:46.876 [2024-11-20 11:35:32.437063] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd2e40) with pdu=0x2000166e23b8 00:22:46.876 [2024-11-20 11:35:32.438454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:20722 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:46.876 [2024-11-20 11:35:32.438491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:22:46.876 [2024-11-20 11:35:32.448523] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd2e40) with pdu=0x2000166e5658 00:22:46.876 [2024-11-20 11:35:32.449735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:18914 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:46.876 [2024-11-20 11:35:32.449788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:22:46.876 [2024-11-20 11:35:32.460916] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd2e40) with pdu=0x2000166e0630 00:22:47.146 [2024-11-20 11:35:32.462072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:4559 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.146 [2024-11-20 11:35:32.462119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:22:47.146 [2024-11-20 11:35:32.476051] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd2e40) with pdu=0x2000166e95a0 00:22:47.146 [2024-11-20 11:35:32.477839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:14048 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.146 [2024-11-20 11:35:32.477887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:22:47.146 [2024-11-20 11:35:32.484828] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd2e40) with pdu=0x2000166fdeb0 00:22:47.146 [2024-11-20 11:35:32.485640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:2226 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.146 [2024-11-20 11:35:32.485677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:22:47.146 [2024-11-20 11:35:32.499839] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd2e40) with pdu=0x2000166f6890 00:22:47.146 [2024-11-20 11:35:32.501381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:5953 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.146 [2024-11-20 11:35:32.501425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 
cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:22:47.146 [2024-11-20 11:35:32.511431] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd2e40) with pdu=0x2000166f6020 00:22:47.146 [2024-11-20 11:35:32.512681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:23603 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.146 [2024-11-20 11:35:32.512720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:22:47.146 [2024-11-20 11:35:32.524427] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd2e40) with pdu=0x2000166eb760 00:22:47.146 [2024-11-20 11:35:32.525984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:8432 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.146 [2024-11-20 11:35:32.526031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:22:47.146 [2024-11-20 11:35:32.541777] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd2e40) with pdu=0x2000166e0a68 00:22:47.146 [2024-11-20 11:35:32.543659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:2793 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.146 [2024-11-20 11:35:32.543706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:22:47.146 [2024-11-20 11:35:32.550663] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd2e40) with pdu=0x2000166f46d0 00:22:47.146 [2024-11-20 11:35:32.551534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:13930 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.146 [2024-11-20 11:35:32.551572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:22:47.146 [2024-11-20 11:35:32.565411] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd2e40) with pdu=0x2000166eaab8 00:22:47.146 [2024-11-20 11:35:32.566999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:20310 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.146 [2024-11-20 11:35:32.567040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:22:47.146 [2024-11-20 11:35:32.576841] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd2e40) with pdu=0x2000166e0ea0 00:22:47.146 [2024-11-20 11:35:32.578230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:21039 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.146 [2024-11-20 11:35:32.578270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:22:47.146 [2024-11-20 11:35:32.588850] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd2e40) with pdu=0x2000166e8088 00:22:47.146 [2024-11-20 11:35:32.590124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:1745 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.146 [2024-11-20 11:35:32.590165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:22:47.146 [2024-11-20 11:35:32.603688] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd2e40) with pdu=0x2000166fd640 00:22:47.146 [2024-11-20 11:35:32.605633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:15665 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.146 [2024-11-20 11:35:32.605676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:22:47.146 [2024-11-20 11:35:32.612481] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd2e40) with pdu=0x2000166f0bc0 00:22:47.146 [2024-11-20 11:35:32.613448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:21381 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.146 [2024-11-20 11:35:32.613487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:22:47.146 [2024-11-20 11:35:32.627186] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd2e40) with pdu=0x2000166f1868 00:22:47.146 [2024-11-20 11:35:32.628853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:12358 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.146 [2024-11-20 11:35:32.628895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:22:47.146 [2024-11-20 11:35:32.638655] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd2e40) with pdu=0x2000166e8d30 00:22:47.146 [2024-11-20 11:35:32.640131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:22866 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.146 [2024-11-20 11:35:32.640177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:22:47.146 [2024-11-20 11:35:32.650610] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd2e40) with pdu=0x2000166ed4e8 00:22:47.146 [2024-11-20 11:35:32.651955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:2201 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.146 [2024-11-20 11:35:32.651997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:22:47.146 [2024-11-20 11:35:32.665452] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd2e40) with pdu=0x2000166e6738 00:22:47.146 [2024-11-20 11:35:32.668068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:22979 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.146 [2024-11-20 11:35:32.668116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:22:47.146 [2024-11-20 11:35:32.675043] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd2e40) with pdu=0x2000166e38d0 00:22:47.146 [2024-11-20 11:35:32.676144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:2835 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.146 [2024-11-20 11:35:32.676188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:22:47.146 [2024-11-20 11:35:32.690347] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd2e40) with pdu=0x2000166e5658 00:22:47.146 [2024-11-20 11:35:32.692085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:13057 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.146 [2024-11-20 11:35:32.692129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:22:47.146 [2024-11-20 11:35:32.699083] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd2e40) with pdu=0x2000166e49b0 00:22:47.146 [2024-11-20 11:35:32.699847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:17110 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.146 [2024-11-20 11:35:32.699886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:47.147 [2024-11-20 11:35:32.713792] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd2e40) with pdu=0x2000166f5be8 00:22:47.147 [2024-11-20 11:35:32.715226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:7613 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.147 [2024-11-20 11:35:32.715267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:22:47.441 [2024-11-20 11:35:32.727148] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd2e40) with pdu=0x2000166fdeb0 00:22:47.441 [2024-11-20 11:35:32.728008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:1696 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.441 [2024-11-20 11:35:32.728055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:22:47.441 [2024-11-20 11:35:32.742062] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd2e40) with pdu=0x2000166ed0b0 00:22:47.441 [2024-11-20 11:35:32.743486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:16935 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.441 [2024-11-20 11:35:32.743530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:22:47.441 [2024-11-20 11:35:32.757271] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd2e40) with pdu=0x2000166f6020 00:22:47.441 [2024-11-20 11:35:32.759123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:18762 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.441 [2024-11-20 11:35:32.759167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:22:47.441 [2024-11-20 11:35:32.766161] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd2e40) with pdu=0x2000166ea680 00:22:47.441 [2024-11-20 11:35:32.767011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:14293 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.441 [2024-11-20 11:35:32.767054] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:22:47.441 [2024-11-20 11:35:32.781078] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd2e40) with pdu=0x2000166e6fa8 00:22:47.441 [2024-11-20 11:35:32.782758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:6442 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.441 [2024-11-20 11:35:32.782803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:22:47.441 [2024-11-20 11:35:32.794005] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd2e40) with pdu=0x2000166fe720 00:22:47.441 [2024-11-20 11:35:32.795536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:1211 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.441 [2024-11-20 11:35:32.795590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:22:47.441 [2024-11-20 11:35:32.805706] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd2e40) with pdu=0x2000166e5a90 00:22:47.441 [2024-11-20 11:35:32.807091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:20107 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.441 [2024-11-20 11:35:32.807133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:22:47.441 [2024-11-20 11:35:32.817244] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd2e40) with pdu=0x2000166e8088 00:22:47.441 [2024-11-20 11:35:32.818443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:18200 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.441 [2024-11-20 11:35:32.818484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:22:47.441 [2024-11-20 11:35:32.828836] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd2e40) with pdu=0x2000166eb760 00:22:47.441 [2024-11-20 11:35:32.829898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:11546 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.441 [2024-11-20 11:35:32.829937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:22:47.441 [2024-11-20 11:35:32.844147] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd2e40) with pdu=0x2000166eee38 00:22:47.441 [2024-11-20 11:35:32.846083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:19046 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.441 [2024-11-20 11:35:32.846134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:22:47.441 [2024-11-20 11:35:32.853187] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd2e40) with pdu=0x2000166ea248 00:22:47.441 [2024-11-20 11:35:32.854098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:20571 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.441 [2024-11-20 
11:35:32.854150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:22:47.441 [2024-11-20 11:35:32.868377] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd2e40) with pdu=0x2000166e4140 00:22:47.441 [2024-11-20 11:35:32.869653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:13127 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.441 [2024-11-20 11:35:32.869691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:22:47.441 [2024-11-20 11:35:32.881070] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd2e40) with pdu=0x2000166f2d80 00:22:47.441 [2024-11-20 11:35:32.882609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:16142 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.441 [2024-11-20 11:35:32.882658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:22:47.441 [2024-11-20 11:35:32.892453] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd2e40) with pdu=0x2000166ea248 00:22:47.441 [2024-11-20 11:35:32.893779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:23165 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.441 [2024-11-20 11:35:32.893821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:22:47.441 [2024-11-20 11:35:32.904356] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd2e40) with pdu=0x2000166f9f68 00:22:47.441 [2024-11-20 11:35:32.905600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:9645 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.441 [2024-11-20 11:35:32.905647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:22:47.441 [2024-11-20 11:35:32.919006] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd2e40) with pdu=0x2000166df550 00:22:47.441 [2024-11-20 11:35:32.920916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:19153 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.441 [2024-11-20 11:35:32.920953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:22:47.441 [2024-11-20 11:35:32.927588] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd2e40) with pdu=0x2000166dfdc0 00:22:47.441 [2024-11-20 11:35:32.928378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:17360 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.441 [2024-11-20 11:35:32.928416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:22:47.441 [2024-11-20 11:35:32.942974] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd2e40) with pdu=0x2000166efae0 00:22:47.442 [2024-11-20 11:35:32.944745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:13834 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:22:47.442 [2024-11-20 11:35:32.944782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:22:47.442 [2024-11-20 11:35:32.951837] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd2e40) with pdu=0x2000166fc560 00:22:47.442 [2024-11-20 11:35:32.952942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:5285 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.442 [2024-11-20 11:35:32.952980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:22:47.442 [2024-11-20 11:35:32.968130] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd2e40) with pdu=0x2000166ee190 00:22:47.442 [2024-11-20 11:35:32.969736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:18646 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.442 [2024-11-20 11:35:32.969786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:22:47.442 [2024-11-20 11:35:32.979669] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd2e40) with pdu=0x2000166e84c0 00:22:47.442 [2024-11-20 11:35:32.981100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:6748 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.442 [2024-11-20 11:35:32.981138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:22:47.442 [2024-11-20 11:35:32.991196] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd2e40) with pdu=0x2000166eb328 00:22:47.442 [2024-11-20 11:35:32.992575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:22977 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.442 [2024-11-20 11:35:32.992626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:22:47.442 [2024-11-20 11:35:33.003139] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd2e40) with pdu=0x2000166de8a8 00:22:47.442 [2024-11-20 11:35:33.004420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:3223 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.442 [2024-11-20 11:35:33.004457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:22:47.442 [2024-11-20 11:35:33.017817] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd2e40) with pdu=0x2000166f7538 00:22:47.442 [2024-11-20 11:35:33.021381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:22309 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.442 [2024-11-20 11:35:33.021421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:22:47.442 20268.00 IOPS, 79.17 MiB/s 00:22:47.442 Latency(us) 00:22:47.442 [2024-11-20T11:35:33.031Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:47.442 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:22:47.442 nvme0n1 : 2.00 20294.02 
79.27 0.00 0.00 6300.60 2517.18 16801.05
00:22:47.442 [2024-11-20T11:35:33.031Z] ===================================================================================================================
00:22:47.442 [2024-11-20T11:35:33.031Z] Total : 20294.02 79.27 0.00 0.00 6300.60 2517.18 16801.05
00:22:47.442 {
00:22:47.442 "results": [
00:22:47.442 {
00:22:47.442 "job": "nvme0n1",
00:22:47.442 "core_mask": "0x2",
00:22:47.442 "workload": "randwrite",
00:22:47.442 "status": "finished",
00:22:47.442 "queue_depth": 128,
00:22:47.442 "io_size": 4096,
00:22:47.442 "runtime": 2.003743,
00:22:47.442 "iops": 20294.019742052747,
00:22:47.442 "mibps": 79.27351461739354,
00:22:47.442 "io_failed": 0,
00:22:47.442 "io_timeout": 0,
00:22:47.442 "avg_latency_us": 6300.595831380896,
00:22:47.442 "min_latency_us": 2517.1781818181817,
00:22:47.442 "max_latency_us": 16801.04727272727
00:22:47.442 }
00:22:47.442 ],
00:22:47.442 "core_count": 1
00:22:47.442 }
00:22:47.700 11:35:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:22:47.700 11:35:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:22:47.700 | .driver_specific
00:22:47.700 | .nvme_error
00:22:47.700 | .status_code
00:22:47.700 | .command_transient_transport_error'
00:22:47.700 11:35:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:22:47.700 11:35:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:22:47.959 11:35:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 159 > 0 ))
00:22:47.959 11:35:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 94260
00:22:47.959 11:35:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 94260 ']'
00:22:47.959 11:35:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 94260
00:22:47.959 11:35:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
00:22:47.959 11:35:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:22:47.959 11:35:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 94260
00:22:47.959 11:35:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:22:47.959 11:35:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:22:47.959 11:35:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 94260'
killing process with pid 94260
11:35:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 94260
Received shutdown signal, test time was about 2.000000 seconds
00:22:47.959
00:22:47.959 Latency(us)
00:22:47.959 [2024-11-20T11:35:33.548Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:47.959 [2024-11-20T11:35:33.548Z] ===================================================================================================================
00:22:47.959 [2024-11-20T11:35:33.548Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00
0.00
00:22:47.959 11:35:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 94260
00:22:47.959 11:35:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16
00:22:47.959 11:35:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:22:47.959 11:35:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:22:47.959 11:35:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:22:47.959 11:35:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:22:47.959 11:35:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=94331
00:22:47.959 11:35:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z
00:22:47.959 11:35:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 94331 /var/tmp/bperf.sock
00:22:47.959 11:35:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 94331 ']'
00:22:47.959 11:35:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock
00:22:47.959 11:35:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100
00:22:47.959 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:22:47.959 11:35:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:22:47.959 11:35:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable
00:22:47.959 11:35:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:22:48.217 [2024-11-20 11:35:33.568427] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization...
00:22:48.217 [2024-11-20 11:35:33.568530] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94331 ]
00:22:48.217 I/O size of 131072 is greater than zero copy threshold (65536).
00:22:48.217 Zero copy mechanism will not be used.
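The xtrace above shows how digest.sh decides that the randwrite/4096/qd128 case passed: it pulls bdev I/O statistics over the bdevperf RPC socket and extracts the NVMe transient-transport-error counter, which must be non-zero (here 159) before the bdevperf process is killed and the next case is launched. A minimal sketch of that check follows; the function name check_transient_errors is hypothetical, while the rpc.py path, socket, bdev name and jq filter are copied from the log, and the counter is assumed to be populated because digest.sh configures bdev_nvme_set_options --nvme-error-stat before attaching the controller (visible in the next case below).

    check_transient_errors() {
        # hypothetical helper; mirrors host/digest.sh steps @27/@28/@71 above
        local rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
        local sock=/var/tmp/bperf.sock
        local count
        # bdev_get_iostat reports driver_specific.nvme_error counters when the
        # bdev_nvme module was configured with --nvme-error-stat
        count=$("$rpc" -s "$sock" bdev_get_iostat -b nvme0n1 \
            | jq -r '.bdevs[0]
                     | .driver_specific
                     | .nvme_error
                     | .status_code
                     | .command_transient_transport_error')
        # the case passes only if at least one transient transport error
        # (a data digest failure) was recorded during the run
        (( count > 0 ))
    }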
00:22:48.217 [2024-11-20 11:35:33.713506] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:22:48.217 [2024-11-20 11:35:33.748158] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:22:48.476 11:35:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:22:48.476 11:35:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0
00:22:48.476 11:35:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:22:48.476 11:35:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:22:48.734 11:35:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:22:48.734 11:35:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:48.734 11:35:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:22:48.734 11:35:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:48.734 11:35:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:22:48.734 11:35:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:22:49.302 nvme0n1
00:22:49.302 11:35:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:22:49.302 11:35:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:49.302 11:35:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:22:49.302 11:35:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:49.302 11:35:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:22:49.302 11:35:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:22:49.302 I/O size of 131072 is greater than zero copy threshold (65536).
00:22:49.302 Zero copy mechanism will not be used.
00:22:49.302 Running I/O for 2 seconds...
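Before perform_tests is issued above, the 128 KiB, queue-depth-16 case is wired up the same way as the previous one: NVMe error counters and unlimited retries are enabled inside bdevperf, CRC32C error injection is disabled while the controller is attached with data digest (--ddgst) so the attach itself succeeds, and only then is the injection switched to corrupt mode so that subsequent writes fail data digest verification and complete with COMMAND TRANSIENT TRANSPORT ERROR, as the completions below show. A rough sketch of that sequence, assuming the digest.sh helpers seen in the xtrace: bperf_rpc and bperf_py are reconstructed from their expanded commands, rpc_cmd is the autotest helper that talks to the NVMe-oF target, and the wrapper name run_digest_error_case is hypothetical.

    bperf_rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock "$@"; }
    bperf_py() { /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock "$@"; }

    run_digest_error_case() {
        # keep per-bdev NVMe error counters and retry failed I/O indefinitely
        bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

        # attach with data digest enabled while CRC32C injection is off,
        # so the controller can be created cleanly
        rpc_cmd accel_error_inject_error -o crc32c -t disable
        bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 \
            -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

        # start corrupting CRC32C results (-t corrupt -i 32, as in the log) so
        # data digest checks fail and writes complete with transient errors
        rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32

        # run the workload defined on the bdevperf command line (-w randwrite -o 131072 -q 16 -t 2)
        bperf_py perform_tests
    }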
00:22:49.302 [2024-11-20 11:35:34.779760] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:49.302 [2024-11-20 11:35:34.779889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.302 [2024-11-20 11:35:34.779921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:49.302 [2024-11-20 11:35:34.785329] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:49.302 [2024-11-20 11:35:34.785451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.302 [2024-11-20 11:35:34.785482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:49.302 [2024-11-20 11:35:34.790642] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:49.302 [2024-11-20 11:35:34.790720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.302 [2024-11-20 11:35:34.790746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:49.302 [2024-11-20 11:35:34.795944] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:49.302 [2024-11-20 11:35:34.796047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.302 [2024-11-20 11:35:34.796072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:49.302 [2024-11-20 11:35:34.801282] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:49.302 [2024-11-20 11:35:34.801402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.302 [2024-11-20 11:35:34.801431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:49.302 [2024-11-20 11:35:34.807500] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:49.302 [2024-11-20 11:35:34.807639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.302 [2024-11-20 11:35:34.807670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:49.302 [2024-11-20 11:35:34.813212] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:49.302 [2024-11-20 11:35:34.813401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.302 [2024-11-20 11:35:34.813434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 
cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:49.302 [2024-11-20 11:35:34.819793] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:49.302 [2024-11-20 11:35:34.819927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.302 [2024-11-20 11:35:34.819959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:49.302 [2024-11-20 11:35:34.825440] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:49.302 [2024-11-20 11:35:34.825571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.302 [2024-11-20 11:35:34.825601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:49.302 [2024-11-20 11:35:34.831820] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:49.302 [2024-11-20 11:35:34.831999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.302 [2024-11-20 11:35:34.832028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:49.302 [2024-11-20 11:35:34.837561] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:49.302 [2024-11-20 11:35:34.837712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.302 [2024-11-20 11:35:34.837754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:49.302 [2024-11-20 11:35:34.843181] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:49.302 [2024-11-20 11:35:34.843311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.302 [2024-11-20 11:35:34.843341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:49.302 [2024-11-20 11:35:34.849941] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:49.302 [2024-11-20 11:35:34.850077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.302 [2024-11-20 11:35:34.850118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:49.302 [2024-11-20 11:35:34.855576] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:49.302 [2024-11-20 11:35:34.855720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.302 [2024-11-20 11:35:34.855749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:49.302 [2024-11-20 11:35:34.861091] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:49.302 [2024-11-20 11:35:34.861230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.302 [2024-11-20 11:35:34.861259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:49.302 [2024-11-20 11:35:34.866694] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:49.302 [2024-11-20 11:35:34.866822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.302 [2024-11-20 11:35:34.866850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:49.303 [2024-11-20 11:35:34.872171] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:49.303 [2024-11-20 11:35:34.872299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.303 [2024-11-20 11:35:34.872329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:49.303 [2024-11-20 11:35:34.877638] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:49.303 [2024-11-20 11:35:34.877775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.303 [2024-11-20 11:35:34.877804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:49.303 [2024-11-20 11:35:34.883002] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:49.303 [2024-11-20 11:35:34.883132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.303 [2024-11-20 11:35:34.883162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:49.562 [2024-11-20 11:35:34.889320] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:49.562 [2024-11-20 11:35:34.889488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.562 [2024-11-20 11:35:34.889518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:49.562 [2024-11-20 11:35:34.897491] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:49.562 [2024-11-20 11:35:34.897678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.562 [2024-11-20 11:35:34.897708] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:49.562 [2024-11-20 11:35:34.905728] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:49.562 [2024-11-20 11:35:34.905900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.562 [2024-11-20 11:35:34.905930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:49.562 [2024-11-20 11:35:34.913892] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:49.562 [2024-11-20 11:35:34.914063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.562 [2024-11-20 11:35:34.914094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:49.562 [2024-11-20 11:35:34.921918] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:49.562 [2024-11-20 11:35:34.922077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.562 [2024-11-20 11:35:34.922111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:49.562 [2024-11-20 11:35:34.930080] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:49.562 [2024-11-20 11:35:34.930281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.562 [2024-11-20 11:35:34.930327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:49.562 [2024-11-20 11:35:34.938298] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:49.562 [2024-11-20 11:35:34.938490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.562 [2024-11-20 11:35:34.938539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:49.562 [2024-11-20 11:35:34.946013] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:49.562 [2024-11-20 11:35:34.946199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.562 [2024-11-20 11:35:34.946244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:49.562 [2024-11-20 11:35:34.954150] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:49.562 [2024-11-20 11:35:34.954321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.562 [2024-11-20 11:35:34.954367] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:49.562 [2024-11-20 11:35:34.962081] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:49.562 [2024-11-20 11:35:34.962253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.562 [2024-11-20 11:35:34.962296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:49.562 [2024-11-20 11:35:34.970177] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:49.562 [2024-11-20 11:35:34.970377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.562 [2024-11-20 11:35:34.970421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:49.562 [2024-11-20 11:35:34.978142] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:49.562 [2024-11-20 11:35:34.978320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.562 [2024-11-20 11:35:34.978365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:49.562 [2024-11-20 11:35:34.986170] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:49.562 [2024-11-20 11:35:34.986357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.562 [2024-11-20 11:35:34.986403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:49.562 [2024-11-20 11:35:34.994110] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:49.562 [2024-11-20 11:35:34.994303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.562 [2024-11-20 11:35:34.994348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:49.562 [2024-11-20 11:35:35.002242] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:49.562 [2024-11-20 11:35:35.002443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.563 [2024-11-20 11:35:35.002493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:49.563 [2024-11-20 11:35:35.010128] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:49.563 [2024-11-20 11:35:35.010324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.563 [2024-11-20 
11:35:35.010373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:49.563 [2024-11-20 11:35:35.017868] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:49.563 [2024-11-20 11:35:35.018066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.563 [2024-11-20 11:35:35.018110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:49.563 [2024-11-20 11:35:35.025783] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:49.563 [2024-11-20 11:35:35.025989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.563 [2024-11-20 11:35:35.026035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:49.563 [2024-11-20 11:35:35.033634] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:49.563 [2024-11-20 11:35:35.033834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.563 [2024-11-20 11:35:35.033878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:49.563 [2024-11-20 11:35:35.041497] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:49.563 [2024-11-20 11:35:35.041703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.563 [2024-11-20 11:35:35.041766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:49.563 [2024-11-20 11:35:35.049550] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:49.563 [2024-11-20 11:35:35.049781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.563 [2024-11-20 11:35:35.049825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:49.563 [2024-11-20 11:35:35.056955] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:49.563 [2024-11-20 11:35:35.057087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.563 [2024-11-20 11:35:35.057123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:49.563 [2024-11-20 11:35:35.064512] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:49.563 [2024-11-20 11:35:35.064706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:22:49.563 [2024-11-20 11:35:35.064751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:49.563 [2024-11-20 11:35:35.072333] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:49.563 [2024-11-20 11:35:35.072517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.563 [2024-11-20 11:35:35.072561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:49.563 [2024-11-20 11:35:35.080249] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:49.563 [2024-11-20 11:35:35.080447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.563 [2024-11-20 11:35:35.080489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:49.563 [2024-11-20 11:35:35.088138] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:49.563 [2024-11-20 11:35:35.088323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.563 [2024-11-20 11:35:35.088367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:49.563 [2024-11-20 11:35:35.096039] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:49.563 [2024-11-20 11:35:35.096214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.563 [2024-11-20 11:35:35.096256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:49.563 [2024-11-20 11:35:35.104027] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:49.563 [2024-11-20 11:35:35.104198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.563 [2024-11-20 11:35:35.104240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:49.563 [2024-11-20 11:35:35.111973] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:49.563 [2024-11-20 11:35:35.112152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.563 [2024-11-20 11:35:35.112193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:49.563 [2024-11-20 11:35:35.119939] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:49.563 [2024-11-20 11:35:35.120104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14400 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.563 [2024-11-20 11:35:35.120145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:49.563 [2024-11-20 11:35:35.128003] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:49.563 [2024-11-20 11:35:35.128176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.563 [2024-11-20 11:35:35.128218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:49.563 [2024-11-20 11:35:35.136035] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:49.563 [2024-11-20 11:35:35.136201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.563 [2024-11-20 11:35:35.136245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:49.563 [2024-11-20 11:35:35.144153] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:49.563 [2024-11-20 11:35:35.144320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.563 [2024-11-20 11:35:35.144365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:49.823 [2024-11-20 11:35:35.152126] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:49.823 [2024-11-20 11:35:35.152271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.823 [2024-11-20 11:35:35.152314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:49.823 [2024-11-20 11:35:35.160208] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:49.823 [2024-11-20 11:35:35.160368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.823 [2024-11-20 11:35:35.160407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:49.823 [2024-11-20 11:35:35.168236] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:49.823 [2024-11-20 11:35:35.168443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.823 [2024-11-20 11:35:35.168488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:49.823 [2024-11-20 11:35:35.176442] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:49.823 [2024-11-20 11:35:35.176641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 
nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.823 [2024-11-20 11:35:35.176690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:49.823 [2024-11-20 11:35:35.184655] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:49.823 [2024-11-20 11:35:35.184844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.823 [2024-11-20 11:35:35.184900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:49.823 [2024-11-20 11:35:35.192835] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:49.823 [2024-11-20 11:35:35.193029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.823 [2024-11-20 11:35:35.193078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:49.823 [2024-11-20 11:35:35.201107] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:49.823 [2024-11-20 11:35:35.201293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.823 [2024-11-20 11:35:35.201340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:49.823 [2024-11-20 11:35:35.209427] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:49.823 [2024-11-20 11:35:35.209662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.823 [2024-11-20 11:35:35.209707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:49.823 [2024-11-20 11:35:35.217808] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:49.823 [2024-11-20 11:35:35.217972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.823 [2024-11-20 11:35:35.218015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:49.823 [2024-11-20 11:35:35.226007] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:49.823 [2024-11-20 11:35:35.226191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.823 [2024-11-20 11:35:35.226236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:49.823 [2024-11-20 11:35:35.234325] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:49.823 [2024-11-20 11:35:35.234519] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.823 [2024-11-20 11:35:35.234564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:49.823 [2024-11-20 11:35:35.242594] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:49.823 [2024-11-20 11:35:35.242821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.823 [2024-11-20 11:35:35.242867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:49.823 [2024-11-20 11:35:35.251176] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:49.823 [2024-11-20 11:35:35.251376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.823 [2024-11-20 11:35:35.251421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:49.823 [2024-11-20 11:35:35.259555] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:49.823 [2024-11-20 11:35:35.259775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.823 [2024-11-20 11:35:35.259820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:49.823 [2024-11-20 11:35:35.268303] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:49.823 [2024-11-20 11:35:35.268503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.823 [2024-11-20 11:35:35.268547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:49.823 [2024-11-20 11:35:35.276173] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:49.823 [2024-11-20 11:35:35.276357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.823 [2024-11-20 11:35:35.276400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:49.823 [2024-11-20 11:35:35.284310] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:49.823 [2024-11-20 11:35:35.284478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.823 [2024-11-20 11:35:35.284535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:49.823 [2024-11-20 11:35:35.292922] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:49.823 [2024-11-20 11:35:35.293108] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.823 [2024-11-20 11:35:35.293152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:49.823 [2024-11-20 11:35:35.301218] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:49.823 [2024-11-20 11:35:35.301409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.824 [2024-11-20 11:35:35.301453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:49.824 [2024-11-20 11:35:35.309445] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:49.824 [2024-11-20 11:35:35.309659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.824 [2024-11-20 11:35:35.309704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:49.824 [2024-11-20 11:35:35.316556] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:49.824 [2024-11-20 11:35:35.316822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.824 [2024-11-20 11:35:35.316876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:49.824 [2024-11-20 11:35:35.322077] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:49.824 [2024-11-20 11:35:35.322268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.824 [2024-11-20 11:35:35.322315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:49.824 [2024-11-20 11:35:35.327539] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:49.824 [2024-11-20 11:35:35.327824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.824 [2024-11-20 11:35:35.327873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:49.824 [2024-11-20 11:35:35.333074] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:49.824 [2024-11-20 11:35:35.333266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.824 [2024-11-20 11:35:35.333313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:49.824 [2024-11-20 11:35:35.338552] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:49.824 [2024-11-20 
11:35:35.338862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.824 [2024-11-20 11:35:35.338926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:49.824 [2024-11-20 11:35:35.343987] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:49.824 [2024-11-20 11:35:35.344230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.824 [2024-11-20 11:35:35.344277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:49.824 [2024-11-20 11:35:35.349445] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:49.824 [2024-11-20 11:35:35.349729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.824 [2024-11-20 11:35:35.349792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:49.824 [2024-11-20 11:35:35.354864] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:49.824 [2024-11-20 11:35:35.355145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.824 [2024-11-20 11:35:35.355192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:49.824 [2024-11-20 11:35:35.360218] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:49.824 [2024-11-20 11:35:35.360509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.824 [2024-11-20 11:35:35.360558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:49.824 [2024-11-20 11:35:35.365577] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:49.824 [2024-11-20 11:35:35.365881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.824 [2024-11-20 11:35:35.365931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:49.824 [2024-11-20 11:35:35.370996] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:49.824 [2024-11-20 11:35:35.371237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.824 [2024-11-20 11:35:35.371277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:49.824 [2024-11-20 11:35:35.375838] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with 
pdu=0x2000166ff3c8 00:22:49.824 [2024-11-20 11:35:35.376453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.824 [2024-11-20 11:35:35.376515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:49.824 [2024-11-20 11:35:35.381032] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:49.824 [2024-11-20 11:35:35.381591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.824 [2024-11-20 11:35:35.381675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:49.824 [2024-11-20 11:35:35.386118] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:49.824 [2024-11-20 11:35:35.386674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.824 [2024-11-20 11:35:35.386735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:49.824 [2024-11-20 11:35:35.391196] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:49.824 [2024-11-20 11:35:35.391649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.824 [2024-11-20 11:35:35.391709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:49.824 [2024-11-20 11:35:35.396333] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:49.824 [2024-11-20 11:35:35.396861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.824 [2024-11-20 11:35:35.396924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:49.824 [2024-11-20 11:35:35.401603] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:49.824 [2024-11-20 11:35:35.402159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.824 [2024-11-20 11:35:35.402209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:49.824 [2024-11-20 11:35:35.406879] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:49.824 [2024-11-20 11:35:35.407219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.824 [2024-11-20 11:35:35.407256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:50.083 [2024-11-20 11:35:35.411913] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:50.083 [2024-11-20 11:35:35.412251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.083 [2024-11-20 11:35:35.412284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:50.083 [2024-11-20 11:35:35.417034] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:50.083 [2024-11-20 11:35:35.417380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.083 [2024-11-20 11:35:35.417412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:50.083 [2024-11-20 11:35:35.422200] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:50.083 [2024-11-20 11:35:35.422532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.083 [2024-11-20 11:35:35.422566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:50.083 [2024-11-20 11:35:35.427348] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:50.083 [2024-11-20 11:35:35.427711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.083 [2024-11-20 11:35:35.427746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:50.083 [2024-11-20 11:35:35.432451] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:50.083 [2024-11-20 11:35:35.432816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.083 [2024-11-20 11:35:35.432850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:50.083 [2024-11-20 11:35:35.437637] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:50.083 [2024-11-20 11:35:35.437989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.084 [2024-11-20 11:35:35.438024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:50.084 [2024-11-20 11:35:35.442755] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:50.084 [2024-11-20 11:35:35.443092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.084 [2024-11-20 11:35:35.443125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:50.084 [2024-11-20 11:35:35.447856] tcp.c:2233:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:50.084 [2024-11-20 11:35:35.448189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.084 [2024-11-20 11:35:35.448221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:50.084 [2024-11-20 11:35:35.452944] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:50.084 [2024-11-20 11:35:35.453276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.084 [2024-11-20 11:35:35.453309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:50.084 [2024-11-20 11:35:35.458045] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:50.084 [2024-11-20 11:35:35.458380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.084 [2024-11-20 11:35:35.458413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:50.084 [2024-11-20 11:35:35.463200] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:50.084 [2024-11-20 11:35:35.463525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.084 [2024-11-20 11:35:35.463558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:50.084 [2024-11-20 11:35:35.468231] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:50.084 [2024-11-20 11:35:35.468561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.084 [2024-11-20 11:35:35.468594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:50.084 [2024-11-20 11:35:35.473315] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:50.084 [2024-11-20 11:35:35.473660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.084 [2024-11-20 11:35:35.473693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:50.084 [2024-11-20 11:35:35.478403] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:50.084 [2024-11-20 11:35:35.478756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.084 [2024-11-20 11:35:35.478790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:50.084 [2024-11-20 11:35:35.483545] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:50.084 [2024-11-20 11:35:35.483912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.084 [2024-11-20 11:35:35.483947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:50.084 [2024-11-20 11:35:35.488793] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:50.084 [2024-11-20 11:35:35.489161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.084 [2024-11-20 11:35:35.489195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:50.084 [2024-11-20 11:35:35.494033] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:50.084 [2024-11-20 11:35:35.494411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.084 [2024-11-20 11:35:35.494446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:50.084 [2024-11-20 11:35:35.499246] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:50.084 [2024-11-20 11:35:35.499603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.084 [2024-11-20 11:35:35.499652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:50.084 [2024-11-20 11:35:35.504377] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:50.084 [2024-11-20 11:35:35.504746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.084 [2024-11-20 11:35:35.504782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:50.084 [2024-11-20 11:35:35.509738] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:50.084 [2024-11-20 11:35:35.510109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.084 [2024-11-20 11:35:35.510149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:50.084 [2024-11-20 11:35:35.514919] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:50.084 [2024-11-20 11:35:35.515271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.084 [2024-11-20 11:35:35.515307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:50.084 
[2024-11-20 11:35:35.520050] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:50.084 [2024-11-20 11:35:35.520382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.084 [2024-11-20 11:35:35.520419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:50.084 [2024-11-20 11:35:35.525185] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:50.084 [2024-11-20 11:35:35.525520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.084 [2024-11-20 11:35:35.525555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:50.084 [2024-11-20 11:35:35.530233] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:50.084 [2024-11-20 11:35:35.530561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.084 [2024-11-20 11:35:35.530596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:50.084 [2024-11-20 11:35:35.535335] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:50.084 [2024-11-20 11:35:35.535684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.084 [2024-11-20 11:35:35.535718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:50.084 [2024-11-20 11:35:35.540476] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:50.084 [2024-11-20 11:35:35.540830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.084 [2024-11-20 11:35:35.540869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:50.084 [2024-11-20 11:35:35.545541] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:50.084 [2024-11-20 11:35:35.545942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.084 [2024-11-20 11:35:35.545977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:50.084 [2024-11-20 11:35:35.550681] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:50.084 [2024-11-20 11:35:35.551016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.084 [2024-11-20 11:35:35.551050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 
p:0 m:0 dnr:0 00:22:50.084 [2024-11-20 11:35:35.555732] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:50.084 [2024-11-20 11:35:35.556067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.084 [2024-11-20 11:35:35.556104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:50.084 [2024-11-20 11:35:35.560897] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:50.084 [2024-11-20 11:35:35.561222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.084 [2024-11-20 11:35:35.561256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:50.084 [2024-11-20 11:35:35.566012] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:50.084 [2024-11-20 11:35:35.566347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.084 [2024-11-20 11:35:35.566387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:50.084 [2024-11-20 11:35:35.571133] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:50.084 [2024-11-20 11:35:35.571462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.084 [2024-11-20 11:35:35.571495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:50.084 [2024-11-20 11:35:35.576228] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:50.084 [2024-11-20 11:35:35.576561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.084 [2024-11-20 11:35:35.576601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:50.084 [2024-11-20 11:35:35.581366] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:50.084 [2024-11-20 11:35:35.581724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.084 [2024-11-20 11:35:35.581776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:50.084 [2024-11-20 11:35:35.586474] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:50.084 [2024-11-20 11:35:35.586828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.084 [2024-11-20 11:35:35.586868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:50.084 [2024-11-20 11:35:35.591526] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:50.084 [2024-11-20 11:35:35.591873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.084 [2024-11-20 11:35:35.591912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:50.084 [2024-11-20 11:35:35.596628] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:50.084 [2024-11-20 11:35:35.596970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.085 [2024-11-20 11:35:35.597008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:50.085 [2024-11-20 11:35:35.601669] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:50.085 [2024-11-20 11:35:35.602025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.085 [2024-11-20 11:35:35.602065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:50.085 [2024-11-20 11:35:35.606759] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:50.085 [2024-11-20 11:35:35.607093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.085 [2024-11-20 11:35:35.607132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:50.085 [2024-11-20 11:35:35.611774] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:50.085 [2024-11-20 11:35:35.612101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.085 [2024-11-20 11:35:35.612141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:50.085 [2024-11-20 11:35:35.616836] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:50.085 [2024-11-20 11:35:35.617170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.085 [2024-11-20 11:35:35.617202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:50.085 [2024-11-20 11:35:35.622001] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:50.085 [2024-11-20 11:35:35.622337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.085 [2024-11-20 11:35:35.622373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:50.085 [2024-11-20 11:35:35.627109] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:50.085 [2024-11-20 11:35:35.627437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.085 [2024-11-20 11:35:35.627470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:50.085 [2024-11-20 11:35:35.632138] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:50.085 [2024-11-20 11:35:35.632469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.085 [2024-11-20 11:35:35.632502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:50.085 [2024-11-20 11:35:35.637216] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:50.085 [2024-11-20 11:35:35.637549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.085 [2024-11-20 11:35:35.637585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:50.085 [2024-11-20 11:35:35.642366] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:50.085 [2024-11-20 11:35:35.642715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.085 [2024-11-20 11:35:35.642747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:50.085 [2024-11-20 11:35:35.647398] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:50.085 [2024-11-20 11:35:35.647741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.085 [2024-11-20 11:35:35.647778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:50.085 [2024-11-20 11:35:35.652438] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:50.085 [2024-11-20 11:35:35.652783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.085 [2024-11-20 11:35:35.652815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:50.085 [2024-11-20 11:35:35.658762] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:50.085 [2024-11-20 11:35:35.659048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.085 [2024-11-20 11:35:35.659081] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:50.085 [2024-11-20 11:35:35.665292] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:50.085 [2024-11-20 11:35:35.665585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.085 [2024-11-20 11:35:35.665629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:50.344 [2024-11-20 11:35:35.672294] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:50.344 [2024-11-20 11:35:35.672502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.344 [2024-11-20 11:35:35.672529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:50.344 [2024-11-20 11:35:35.679439] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:50.344 [2024-11-20 11:35:35.679652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.344 [2024-11-20 11:35:35.679685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:50.344 [2024-11-20 11:35:35.686478] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:50.344 [2024-11-20 11:35:35.686705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.344 [2024-11-20 11:35:35.686735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:50.344 [2024-11-20 11:35:35.693479] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:50.344 [2024-11-20 11:35:35.693769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.344 [2024-11-20 11:35:35.693803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:50.344 [2024-11-20 11:35:35.700348] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:50.344 [2024-11-20 11:35:35.700596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.344 [2024-11-20 11:35:35.700642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:50.344 [2024-11-20 11:35:35.706979] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:50.344 [2024-11-20 11:35:35.707285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.344 [2024-11-20 
11:35:35.707320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:50.344 [2024-11-20 11:35:35.713789] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:50.344 [2024-11-20 11:35:35.714003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.344 [2024-11-20 11:35:35.714037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:50.344 [2024-11-20 11:35:35.720232] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:50.344 [2024-11-20 11:35:35.720509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.344 [2024-11-20 11:35:35.720542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:50.344 [2024-11-20 11:35:35.726815] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:50.344 [2024-11-20 11:35:35.727037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.344 [2024-11-20 11:35:35.727067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:50.344 [2024-11-20 11:35:35.733330] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:50.344 [2024-11-20 11:35:35.733610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.344 [2024-11-20 11:35:35.733657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:50.344 [2024-11-20 11:35:35.739872] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:50.344 [2024-11-20 11:35:35.740153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.344 [2024-11-20 11:35:35.740189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:50.344 [2024-11-20 11:35:35.746459] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:50.344 [2024-11-20 11:35:35.746699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.344 [2024-11-20 11:35:35.746727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:50.344 [2024-11-20 11:35:35.752892] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:50.344 [2024-11-20 11:35:35.753146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:22:50.344 [2024-11-20 11:35:35.753180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:50.344 [2024-11-20 11:35:35.759351] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:50.344 [2024-11-20 11:35:35.759563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.344 [2024-11-20 11:35:35.759595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:50.344 [2024-11-20 11:35:35.765804] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:50.344 [2024-11-20 11:35:35.766040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.344 [2024-11-20 11:35:35.766067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:50.344 [2024-11-20 11:35:35.772303] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:50.344 [2024-11-20 11:35:35.772565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.344 [2024-11-20 11:35:35.772598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:50.344 4826.00 IOPS, 603.25 MiB/s [2024-11-20T11:35:35.933Z] [2024-11-20 11:35:35.779734] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:50.344 [2024-11-20 11:35:35.779986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.344 [2024-11-20 11:35:35.780020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:50.344 [2024-11-20 11:35:35.786767] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:50.344 [2024-11-20 11:35:35.786967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.344 [2024-11-20 11:35:35.786995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:50.344 [2024-11-20 11:35:35.792011] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:50.344 [2024-11-20 11:35:35.792248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.344 [2024-11-20 11:35:35.792281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:50.344 [2024-11-20 11:35:35.797137] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:50.344 [2024-11-20 11:35:35.797382] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.344 [2024-11-20 11:35:35.797416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:50.344 [2024-11-20 11:35:35.803917] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:50.344 [2024-11-20 11:35:35.804182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.344 [2024-11-20 11:35:35.804216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:50.344 [2024-11-20 11:35:35.811121] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:50.344 [2024-11-20 11:35:35.811369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.344 [2024-11-20 11:35:35.811403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:50.344 [2024-11-20 11:35:35.818166] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:50.344 [2024-11-20 11:35:35.818424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.344 [2024-11-20 11:35:35.818460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:50.344 [2024-11-20 11:35:35.825281] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:50.344 [2024-11-20 11:35:35.825464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.344 [2024-11-20 11:35:35.825492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:50.344 [2024-11-20 11:35:35.832327] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:50.344 [2024-11-20 11:35:35.832499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.344 [2024-11-20 11:35:35.832527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:50.345 [2024-11-20 11:35:35.839381] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:50.345 [2024-11-20 11:35:35.839518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.345 [2024-11-20 11:35:35.839546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:50.345 [2024-11-20 11:35:35.846506] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:50.345 [2024-11-20 11:35:35.846777] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.345 [2024-11-20 11:35:35.846806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:50.345 [2024-11-20 11:35:35.853550] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:50.345 [2024-11-20 11:35:35.853784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.345 [2024-11-20 11:35:35.853813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:50.345 [2024-11-20 11:35:35.860643] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:50.345 [2024-11-20 11:35:35.860885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.345 [2024-11-20 11:35:35.860913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:50.345 [2024-11-20 11:35:35.867743] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:50.345 [2024-11-20 11:35:35.867994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.345 [2024-11-20 11:35:35.868022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:50.345 [2024-11-20 11:35:35.874861] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:50.345 [2024-11-20 11:35:35.875116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.345 [2024-11-20 11:35:35.875144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:50.345 [2024-11-20 11:35:35.882592] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:50.345 [2024-11-20 11:35:35.882781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.345 [2024-11-20 11:35:35.882808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:50.345 [2024-11-20 11:35:35.889723] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:50.345 [2024-11-20 11:35:35.889906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.345 [2024-11-20 11:35:35.889933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:50.345 [2024-11-20 11:35:35.896832] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:50.345 [2024-11-20 
11:35:35.897097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.345 [2024-11-20 11:35:35.897124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:50.345 [2024-11-20 11:35:35.901966] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:50.345 [2024-11-20 11:35:35.902253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.345 [2024-11-20 11:35:35.902287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:50.345 [2024-11-20 11:35:35.906696] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:50.345 [2024-11-20 11:35:35.906937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.345 [2024-11-20 11:35:35.906969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:50.345 [2024-11-20 11:35:35.911385] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:50.345 [2024-11-20 11:35:35.911644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.345 [2024-11-20 11:35:35.911671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:50.345 [2024-11-20 11:35:35.916120] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:50.345 [2024-11-20 11:35:35.916339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.345 [2024-11-20 11:35:35.916365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:50.345 [2024-11-20 11:35:35.920788] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:50.345 [2024-11-20 11:35:35.921028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.345 [2024-11-20 11:35:35.921055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:50.345 [2024-11-20 11:35:35.925778] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:50.345 [2024-11-20 11:35:35.926006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.345 [2024-11-20 11:35:35.926032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:50.606 [2024-11-20 11:35:35.931080] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with 
pdu=0x2000166ff3c8 00:22:50.606 [2024-11-20 11:35:35.931305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.606 [2024-11-20 11:35:35.931339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:50.606 [2024-11-20 11:35:35.936003] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:50.606 [2024-11-20 11:35:35.936257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.606 [2024-11-20 11:35:35.936291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:50.606 [2024-11-20 11:35:35.940788] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:50.606 [2024-11-20 11:35:35.941008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.606 [2024-11-20 11:35:35.941041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:50.606 [2024-11-20 11:35:35.945458] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:50.606 [2024-11-20 11:35:35.945690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.606 [2024-11-20 11:35:35.945722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:50.606 [2024-11-20 11:35:35.950235] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:50.606 [2024-11-20 11:35:35.950454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.606 [2024-11-20 11:35:35.950486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:50.606 [2024-11-20 11:35:35.955020] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:50.606 [2024-11-20 11:35:35.955242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.606 [2024-11-20 11:35:35.955274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:50.606 [2024-11-20 11:35:35.959793] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:50.606 [2024-11-20 11:35:35.960081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.606 [2024-11-20 11:35:35.960114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:50.606 [2024-11-20 11:35:35.964559] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:50.606 [2024-11-20 11:35:35.964777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.606 [2024-11-20 11:35:35.964813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:50.606 [2024-11-20 11:35:35.969433] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:50.606 [2024-11-20 11:35:35.969635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.606 [2024-11-20 11:35:35.969666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:50.606 [2024-11-20 11:35:35.974279] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:50.606 [2024-11-20 11:35:35.974484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.606 [2024-11-20 11:35:35.974509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:50.606 [2024-11-20 11:35:35.979023] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:50.606 [2024-11-20 11:35:35.979220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.606 [2024-11-20 11:35:35.979254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:50.606 [2024-11-20 11:35:35.983766] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:50.606 [2024-11-20 11:35:35.983958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.606 [2024-11-20 11:35:35.983991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:50.606 [2024-11-20 11:35:35.988483] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:50.606 [2024-11-20 11:35:35.988667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.606 [2024-11-20 11:35:35.988700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:50.606 [2024-11-20 11:35:35.993212] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:50.606 [2024-11-20 11:35:35.993408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.606 [2024-11-20 11:35:35.993441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:50.606 [2024-11-20 11:35:35.997922] tcp.c:2233:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:50.606 [2024-11-20 11:35:35.998199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.606 [2024-11-20 11:35:35.998233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:50.606 [2024-11-20 11:35:36.002736] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:50.606 [2024-11-20 11:35:36.002937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.606 [2024-11-20 11:35:36.002971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:50.606 [2024-11-20 11:35:36.007473] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:50.606 [2024-11-20 11:35:36.007698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.606 [2024-11-20 11:35:36.007732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:50.606 [2024-11-20 11:35:36.012283] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:50.606 [2024-11-20 11:35:36.012421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.606 [2024-11-20 11:35:36.012447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:50.606 [2024-11-20 11:35:36.016994] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:50.606 [2024-11-20 11:35:36.017161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.606 [2024-11-20 11:35:36.017187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:50.606 [2024-11-20 11:35:36.021768] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:50.606 [2024-11-20 11:35:36.021929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.606 [2024-11-20 11:35:36.021956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:50.606 [2024-11-20 11:35:36.026519] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:50.606 [2024-11-20 11:35:36.026721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.606 [2024-11-20 11:35:36.026748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:50.606 [2024-11-20 11:35:36.031254] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:50.606 [2024-11-20 11:35:36.031468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.606 [2024-11-20 11:35:36.031502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:50.606 [2024-11-20 11:35:36.036026] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:50.606 [2024-11-20 11:35:36.036189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.606 [2024-11-20 11:35:36.036230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:50.606 [2024-11-20 11:35:36.040695] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:50.606 [2024-11-20 11:35:36.040913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.606 [2024-11-20 11:35:36.040941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:50.606 [2024-11-20 11:35:36.045535] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:50.606 [2024-11-20 11:35:36.045806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.606 [2024-11-20 11:35:36.045836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:50.607 [2024-11-20 11:35:36.050379] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:50.607 [2024-11-20 11:35:36.050637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.607 [2024-11-20 11:35:36.050671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:50.607 [2024-11-20 11:35:36.055130] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:50.607 [2024-11-20 11:35:36.055361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.607 [2024-11-20 11:35:36.055393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:50.607 [2024-11-20 11:35:36.059905] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:50.607 [2024-11-20 11:35:36.060109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.607 [2024-11-20 11:35:36.060142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:50.607 
[2024-11-20 11:35:36.064654] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:50.607 [2024-11-20 11:35:36.064863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.607 [2024-11-20 11:35:36.064895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:50.607 [2024-11-20 11:35:36.069299] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:50.607 [2024-11-20 11:35:36.069500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.607 [2024-11-20 11:35:36.069533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:50.607 [2024-11-20 11:35:36.074131] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:50.607 [2024-11-20 11:35:36.074284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.607 [2024-11-20 11:35:36.074318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:50.607 [2024-11-20 11:35:36.078816] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:50.607 [2024-11-20 11:35:36.079080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.607 [2024-11-20 11:35:36.079113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:50.607 [2024-11-20 11:35:36.083498] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:50.607 [2024-11-20 11:35:36.083762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.607 [2024-11-20 11:35:36.083795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:50.607 [2024-11-20 11:35:36.088235] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:50.607 [2024-11-20 11:35:36.088436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.607 [2024-11-20 11:35:36.088470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:50.607 [2024-11-20 11:35:36.092976] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:50.607 [2024-11-20 11:35:36.093144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.607 [2024-11-20 11:35:36.093172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 
p:0 m:0 dnr:0 00:22:50.607 [2024-11-20 11:35:36.098043] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:50.607 [2024-11-20 11:35:36.098207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.607 [2024-11-20 11:35:36.098238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:50.607 [2024-11-20 11:35:36.103056] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:50.607 [2024-11-20 11:35:36.103291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.607 [2024-11-20 11:35:36.103327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:50.607 [2024-11-20 11:35:36.107812] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:50.607 [2024-11-20 11:35:36.108108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.607 [2024-11-20 11:35:36.108144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:50.607 [2024-11-20 11:35:36.112562] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:50.607 [2024-11-20 11:35:36.112783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.607 [2024-11-20 11:35:36.112819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:50.607 [2024-11-20 11:35:36.117360] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:50.607 [2024-11-20 11:35:36.117530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.607 [2024-11-20 11:35:36.117558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:50.607 [2024-11-20 11:35:36.122167] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:50.607 [2024-11-20 11:35:36.122366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.607 [2024-11-20 11:35:36.122395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:50.607 [2024-11-20 11:35:36.126955] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:50.607 [2024-11-20 11:35:36.127182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.607 [2024-11-20 11:35:36.127219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:50.607 [2024-11-20 11:35:36.131985] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:50.607 [2024-11-20 11:35:36.132213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.607 [2024-11-20 11:35:36.132248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:50.607 [2024-11-20 11:35:36.137084] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:50.607 [2024-11-20 11:35:36.137270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.607 [2024-11-20 11:35:36.137301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:50.607 [2024-11-20 11:35:36.141888] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:50.607 [2024-11-20 11:35:36.142044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.607 [2024-11-20 11:35:36.142072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:50.607 [2024-11-20 11:35:36.146652] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:50.607 [2024-11-20 11:35:36.146914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.607 [2024-11-20 11:35:36.146950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:50.607 [2024-11-20 11:35:36.151400] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:50.607 [2024-11-20 11:35:36.151677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.607 [2024-11-20 11:35:36.151713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:50.607 [2024-11-20 11:35:36.156231] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:50.607 [2024-11-20 11:35:36.156386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.607 [2024-11-20 11:35:36.156413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:50.607 [2024-11-20 11:35:36.161018] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:50.607 [2024-11-20 11:35:36.161185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.607 [2024-11-20 11:35:36.161213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:50.607 [2024-11-20 11:35:36.165850] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:50.607 [2024-11-20 11:35:36.166034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.607 [2024-11-20 11:35:36.166086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:50.607 [2024-11-20 11:35:36.170693] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:50.607 [2024-11-20 11:35:36.170861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.607 [2024-11-20 11:35:36.170897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:50.607 [2024-11-20 11:35:36.175375] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:50.607 [2024-11-20 11:35:36.175593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.608 [2024-11-20 11:35:36.175641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:50.608 [2024-11-20 11:35:36.180234] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:50.608 [2024-11-20 11:35:36.180407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.608 [2024-11-20 11:35:36.180436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:50.608 [2024-11-20 11:35:36.184830] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:50.608 [2024-11-20 11:35:36.184999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.608 [2024-11-20 11:35:36.185029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:50.608 [2024-11-20 11:35:36.189806] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:50.608 [2024-11-20 11:35:36.190024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.608 [2024-11-20 11:35:36.190061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:50.868 [2024-11-20 11:35:36.194645] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:50.868 [2024-11-20 11:35:36.194850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.868 [2024-11-20 11:35:36.194889] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:50.868 [2024-11-20 11:35:36.199496] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:50.868 [2024-11-20 11:35:36.199790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.868 [2024-11-20 11:35:36.199829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:50.868 [2024-11-20 11:35:36.204332] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:50.868 [2024-11-20 11:35:36.204526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.868 [2024-11-20 11:35:36.204555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:50.868 [2024-11-20 11:35:36.209039] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:50.868 [2024-11-20 11:35:36.209270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.868 [2024-11-20 11:35:36.209312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:50.868 [2024-11-20 11:35:36.213907] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:50.868 [2024-11-20 11:35:36.214056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.868 [2024-11-20 11:35:36.214086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:50.868 [2024-11-20 11:35:36.218715] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:50.868 [2024-11-20 11:35:36.218885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.868 [2024-11-20 11:35:36.218915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:50.868 [2024-11-20 11:35:36.223404] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:50.868 [2024-11-20 11:35:36.223604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.868 [2024-11-20 11:35:36.223649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:50.868 [2024-11-20 11:35:36.228188] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:50.868 [2024-11-20 11:35:36.228494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.868 [2024-11-20 11:35:36.228532] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:50.868 [2024-11-20 11:35:36.232978] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:50.868 [2024-11-20 11:35:36.233206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.868 [2024-11-20 11:35:36.233243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:50.868 [2024-11-20 11:35:36.237697] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:50.868 [2024-11-20 11:35:36.237962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.868 [2024-11-20 11:35:36.237997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:50.868 [2024-11-20 11:35:36.242465] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:50.868 [2024-11-20 11:35:36.242692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.868 [2024-11-20 11:35:36.242720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:50.868 [2024-11-20 11:35:36.247158] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:50.868 [2024-11-20 11:35:36.247373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.868 [2024-11-20 11:35:36.247401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:50.868 [2024-11-20 11:35:36.251983] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:50.868 [2024-11-20 11:35:36.252212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.868 [2024-11-20 11:35:36.252246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:50.868 [2024-11-20 11:35:36.256649] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:50.868 [2024-11-20 11:35:36.256870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.868 [2024-11-20 11:35:36.256904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:50.868 [2024-11-20 11:35:36.261452] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:50.868 [2024-11-20 11:35:36.261668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.868 [2024-11-20 
11:35:36.261706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:50.868 [2024-11-20 11:35:36.266230] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:50.868 [2024-11-20 11:35:36.266437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.868 [2024-11-20 11:35:36.266472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:50.868 [2024-11-20 11:35:36.271008] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:50.868 [2024-11-20 11:35:36.271172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.868 [2024-11-20 11:35:36.271200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:50.868 [2024-11-20 11:35:36.275697] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:50.868 [2024-11-20 11:35:36.275885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.868 [2024-11-20 11:35:36.275910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:50.868 [2024-11-20 11:35:36.280399] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:50.868 [2024-11-20 11:35:36.280651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.868 [2024-11-20 11:35:36.280677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:50.868 [2024-11-20 11:35:36.285091] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:50.868 [2024-11-20 11:35:36.285325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.868 [2024-11-20 11:35:36.285350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:50.868 [2024-11-20 11:35:36.289926] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:50.868 [2024-11-20 11:35:36.290089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.868 [2024-11-20 11:35:36.290124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:50.868 [2024-11-20 11:35:36.294557] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:50.868 [2024-11-20 11:35:36.294828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:22:50.868 [2024-11-20 11:35:36.294856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:50.868 [2024-11-20 11:35:36.299321] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:50.868 [2024-11-20 11:35:36.299516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.868 [2024-11-20 11:35:36.299542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:50.868 [2024-11-20 11:35:36.304048] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:50.868 [2024-11-20 11:35:36.304292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.868 [2024-11-20 11:35:36.304325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:50.869 [2024-11-20 11:35:36.308849] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:50.869 [2024-11-20 11:35:36.309051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.869 [2024-11-20 11:35:36.309085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:50.869 [2024-11-20 11:35:36.313638] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:50.869 [2024-11-20 11:35:36.313864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.869 [2024-11-20 11:35:36.313892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:50.869 [2024-11-20 11:35:36.318445] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:50.869 [2024-11-20 11:35:36.318683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.869 [2024-11-20 11:35:36.318708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:50.869 [2024-11-20 11:35:36.323163] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:50.869 [2024-11-20 11:35:36.323412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.869 [2024-11-20 11:35:36.323445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:50.869 [2024-11-20 11:35:36.327863] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:50.869 [2024-11-20 11:35:36.328083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3936 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.869 [2024-11-20 11:35:36.328117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:50.869 [2024-11-20 11:35:36.332452] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:50.869 [2024-11-20 11:35:36.332709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.869 [2024-11-20 11:35:36.332742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:50.869 [2024-11-20 11:35:36.337160] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:50.869 [2024-11-20 11:35:36.337423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.869 [2024-11-20 11:35:36.337456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:50.869 [2024-11-20 11:35:36.341931] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:50.869 [2024-11-20 11:35:36.342142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.869 [2024-11-20 11:35:36.342168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:50.869 [2024-11-20 11:35:36.346549] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:50.869 [2024-11-20 11:35:36.346798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.869 [2024-11-20 11:35:36.346824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:50.869 [2024-11-20 11:35:36.351264] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:50.869 [2024-11-20 11:35:36.351414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.869 [2024-11-20 11:35:36.351440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:50.869 [2024-11-20 11:35:36.355943] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:50.869 [2024-11-20 11:35:36.356186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.869 [2024-11-20 11:35:36.356211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:50.869 [2024-11-20 11:35:36.360577] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:50.869 [2024-11-20 11:35:36.360847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 
nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.869 [2024-11-20 11:35:36.360878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:50.869 [2024-11-20 11:35:36.365331] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:50.869 [2024-11-20 11:35:36.365532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.869 [2024-11-20 11:35:36.365563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:50.869 [2024-11-20 11:35:36.370196] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:50.869 [2024-11-20 11:35:36.370360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.869 [2024-11-20 11:35:36.370387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:50.869 [2024-11-20 11:35:36.374984] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:50.869 [2024-11-20 11:35:36.375151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.869 [2024-11-20 11:35:36.375178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:50.869 [2024-11-20 11:35:36.379737] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:50.869 [2024-11-20 11:35:36.379923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.869 [2024-11-20 11:35:36.379957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:50.869 [2024-11-20 11:35:36.384420] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:50.869 [2024-11-20 11:35:36.384677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.869 [2024-11-20 11:35:36.384709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:50.869 [2024-11-20 11:35:36.389360] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:50.869 [2024-11-20 11:35:36.389552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.869 [2024-11-20 11:35:36.389585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:50.869 [2024-11-20 11:35:36.394427] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:50.869 [2024-11-20 11:35:36.394600] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.869 [2024-11-20 11:35:36.394643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:50.869 [2024-11-20 11:35:36.399159] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:50.869 [2024-11-20 11:35:36.399372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.869 [2024-11-20 11:35:36.399407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:50.869 [2024-11-20 11:35:36.404055] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:50.869 [2024-11-20 11:35:36.404263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.869 [2024-11-20 11:35:36.404298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:50.869 [2024-11-20 11:35:36.409064] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:50.869 [2024-11-20 11:35:36.409272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.869 [2024-11-20 11:35:36.409307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:50.869 [2024-11-20 11:35:36.414127] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:50.869 [2024-11-20 11:35:36.414404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.869 [2024-11-20 11:35:36.414435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:50.869 [2024-11-20 11:35:36.418888] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:50.869 [2024-11-20 11:35:36.419099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.869 [2024-11-20 11:35:36.419129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:50.869 [2024-11-20 11:35:36.423543] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:50.869 [2024-11-20 11:35:36.423774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.869 [2024-11-20 11:35:36.423807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:50.869 [2024-11-20 11:35:36.428290] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:50.869 [2024-11-20 11:35:36.428443] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.869 [2024-11-20 11:35:36.428477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:50.869 [2024-11-20 11:35:36.433010] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:50.869 [2024-11-20 11:35:36.433227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.869 [2024-11-20 11:35:36.433260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:50.870 [2024-11-20 11:35:36.437791] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:50.870 [2024-11-20 11:35:36.437951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.870 [2024-11-20 11:35:36.437984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:50.870 [2024-11-20 11:35:36.442490] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:50.870 [2024-11-20 11:35:36.442756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.870 [2024-11-20 11:35:36.442791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:50.870 [2024-11-20 11:35:36.447172] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:50.870 [2024-11-20 11:35:36.447422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.870 [2024-11-20 11:35:36.447457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:50.870 [2024-11-20 11:35:36.452044] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:50.870 [2024-11-20 11:35:36.452200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.870 [2024-11-20 11:35:36.452228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:51.129 [2024-11-20 11:35:36.456874] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:51.129 [2024-11-20 11:35:36.457092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.129 [2024-11-20 11:35:36.457121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:51.129 [2024-11-20 11:35:36.461692] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:51.129 [2024-11-20 
11:35:36.461867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.129 [2024-11-20 11:35:36.461895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:51.129 [2024-11-20 11:35:36.466471] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:51.129 [2024-11-20 11:35:36.466677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.129 [2024-11-20 11:35:36.466704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:51.129 [2024-11-20 11:35:36.471221] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:51.130 [2024-11-20 11:35:36.471429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.130 [2024-11-20 11:35:36.471458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:51.130 [2024-11-20 11:35:36.476053] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:51.130 [2024-11-20 11:35:36.476276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.130 [2024-11-20 11:35:36.476304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:51.130 [2024-11-20 11:35:36.480752] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:51.130 [2024-11-20 11:35:36.480971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.130 [2024-11-20 11:35:36.481005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:51.130 [2024-11-20 11:35:36.485530] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:51.130 [2024-11-20 11:35:36.485823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.130 [2024-11-20 11:35:36.485857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:51.130 [2024-11-20 11:35:36.490313] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:51.130 [2024-11-20 11:35:36.490533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.130 [2024-11-20 11:35:36.490567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:51.130 [2024-11-20 11:35:36.495132] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with 
pdu=0x2000166ff3c8 00:22:51.130 [2024-11-20 11:35:36.495359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.130 [2024-11-20 11:35:36.495395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:51.130 [2024-11-20 11:35:36.499956] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:51.130 [2024-11-20 11:35:36.500134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.130 [2024-11-20 11:35:36.500169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:51.130 [2024-11-20 11:35:36.504740] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:51.130 [2024-11-20 11:35:36.504930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.130 [2024-11-20 11:35:36.504958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:51.130 [2024-11-20 11:35:36.509446] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:51.130 [2024-11-20 11:35:36.509601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.130 [2024-11-20 11:35:36.509644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:51.130 [2024-11-20 11:35:36.514275] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:51.130 [2024-11-20 11:35:36.514487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.130 [2024-11-20 11:35:36.514521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:51.130 [2024-11-20 11:35:36.518969] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:51.130 [2024-11-20 11:35:36.519139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.130 [2024-11-20 11:35:36.519172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:51.130 [2024-11-20 11:35:36.523725] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:51.130 [2024-11-20 11:35:36.523942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.130 [2024-11-20 11:35:36.523975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:51.130 [2024-11-20 11:35:36.528423] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:51.130 [2024-11-20 11:35:36.528685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.130 [2024-11-20 11:35:36.528718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:51.130 [2024-11-20 11:35:36.533166] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:51.130 [2024-11-20 11:35:36.533382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.130 [2024-11-20 11:35:36.533415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:51.130 [2024-11-20 11:35:36.537896] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:51.130 [2024-11-20 11:35:36.538112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.130 [2024-11-20 11:35:36.538146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:51.130 [2024-11-20 11:35:36.542647] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:51.130 [2024-11-20 11:35:36.542871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.130 [2024-11-20 11:35:36.542904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:51.130 [2024-11-20 11:35:36.547300] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:51.130 [2024-11-20 11:35:36.547474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.130 [2024-11-20 11:35:36.547500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:51.130 [2024-11-20 11:35:36.552063] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:51.130 [2024-11-20 11:35:36.552221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.130 [2024-11-20 11:35:36.552247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:51.130 [2024-11-20 11:35:36.556845] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:51.130 [2024-11-20 11:35:36.557039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.130 [2024-11-20 11:35:36.557072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:51.130 [2024-11-20 11:35:36.561583] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:51.130 [2024-11-20 11:35:36.561862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.130 [2024-11-20 11:35:36.561888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:51.130 [2024-11-20 11:35:36.566378] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:51.130 [2024-11-20 11:35:36.566561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.130 [2024-11-20 11:35:36.566587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:51.130 [2024-11-20 11:35:36.571117] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:51.130 [2024-11-20 11:35:36.571305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.130 [2024-11-20 11:35:36.571330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:51.130 [2024-11-20 11:35:36.575897] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:51.131 [2024-11-20 11:35:36.576061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.131 [2024-11-20 11:35:36.576087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:51.131 [2024-11-20 11:35:36.580604] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:51.131 [2024-11-20 11:35:36.580784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.131 [2024-11-20 11:35:36.580810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:51.131 [2024-11-20 11:35:36.585339] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:51.131 [2024-11-20 11:35:36.585556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.131 [2024-11-20 11:35:36.585582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:51.131 [2024-11-20 11:35:36.590083] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:51.131 [2024-11-20 11:35:36.590332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.131 [2024-11-20 11:35:36.590358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:51.131 
[2024-11-20 11:35:36.594878] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:51.131 [2024-11-20 11:35:36.595060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.131 [2024-11-20 11:35:36.595086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:51.131 [2024-11-20 11:35:36.599557] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:51.131 [2024-11-20 11:35:36.599838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.131 [2024-11-20 11:35:36.599873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:51.131 [2024-11-20 11:35:36.604411] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:51.131 [2024-11-20 11:35:36.604593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.131 [2024-11-20 11:35:36.604637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:51.131 [2024-11-20 11:35:36.609439] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:51.131 [2024-11-20 11:35:36.609633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.131 [2024-11-20 11:35:36.609659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:51.131 [2024-11-20 11:35:36.614221] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:51.131 [2024-11-20 11:35:36.614383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.131 [2024-11-20 11:35:36.614408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:51.131 [2024-11-20 11:35:36.618995] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:51.131 [2024-11-20 11:35:36.619197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.131 [2024-11-20 11:35:36.619222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:51.131 [2024-11-20 11:35:36.623724] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:51.131 [2024-11-20 11:35:36.623942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.131 [2024-11-20 11:35:36.623986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 
p:0 m:0 dnr:0 00:22:51.131 [2024-11-20 11:35:36.628472] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:51.131 [2024-11-20 11:35:36.628649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.131 [2024-11-20 11:35:36.628674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:51.131 [2024-11-20 11:35:36.633252] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:51.131 [2024-11-20 11:35:36.633413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.131 [2024-11-20 11:35:36.633438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:51.131 [2024-11-20 11:35:36.637988] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:51.131 [2024-11-20 11:35:36.638237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.131 [2024-11-20 11:35:36.638269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:51.131 [2024-11-20 11:35:36.642735] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:51.131 [2024-11-20 11:35:36.642920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.131 [2024-11-20 11:35:36.642945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:51.131 [2024-11-20 11:35:36.647488] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:51.131 [2024-11-20 11:35:36.647715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.131 [2024-11-20 11:35:36.647742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:51.131 [2024-11-20 11:35:36.652303] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:51.131 [2024-11-20 11:35:36.652499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.131 [2024-11-20 11:35:36.652525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:51.131 [2024-11-20 11:35:36.657081] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:51.131 [2024-11-20 11:35:36.657241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.131 [2024-11-20 11:35:36.657268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:51.131 [2024-11-20 11:35:36.661827] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:51.131 [2024-11-20 11:35:36.662002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.131 [2024-11-20 11:35:36.662028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:51.131 [2024-11-20 11:35:36.666573] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:51.131 [2024-11-20 11:35:36.666745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.131 [2024-11-20 11:35:36.666772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:51.131 [2024-11-20 11:35:36.671325] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:51.131 [2024-11-20 11:35:36.671531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.131 [2024-11-20 11:35:36.671559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:51.131 [2024-11-20 11:35:36.676098] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:51.131 [2024-11-20 11:35:36.676281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.131 [2024-11-20 11:35:36.676308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:51.131 [2024-11-20 11:35:36.680867] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:51.131 [2024-11-20 11:35:36.681032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.131 [2024-11-20 11:35:36.681061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:51.131 [2024-11-20 11:35:36.685765] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:51.131 [2024-11-20 11:35:36.685961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.132 [2024-11-20 11:35:36.685990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:51.132 [2024-11-20 11:35:36.690542] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:51.132 [2024-11-20 11:35:36.690762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.132 [2024-11-20 11:35:36.690793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:51.132 [2024-11-20 11:35:36.695321] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:51.132 [2024-11-20 11:35:36.695544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.132 [2024-11-20 11:35:36.695583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:51.132 [2024-11-20 11:35:36.700059] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:51.132 [2024-11-20 11:35:36.700260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.132 [2024-11-20 11:35:36.700288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:51.132 [2024-11-20 11:35:36.704866] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:51.132 [2024-11-20 11:35:36.705033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.132 [2024-11-20 11:35:36.705062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:51.132 [2024-11-20 11:35:36.709586] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:51.132 [2024-11-20 11:35:36.709824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.132 [2024-11-20 11:35:36.709860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:51.132 [2024-11-20 11:35:36.714397] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:51.132 [2024-11-20 11:35:36.714641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.132 [2024-11-20 11:35:36.714671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:51.391 [2024-11-20 11:35:36.719214] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:51.391 [2024-11-20 11:35:36.719428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.391 [2024-11-20 11:35:36.719456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:51.391 [2024-11-20 11:35:36.724002] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:51.391 [2024-11-20 11:35:36.724207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.391 [2024-11-20 11:35:36.724235] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:51.391 [2024-11-20 11:35:36.728811] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:51.391 [2024-11-20 11:35:36.729019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.391 [2024-11-20 11:35:36.729046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:51.391 [2024-11-20 11:35:36.733521] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:51.391 [2024-11-20 11:35:36.733789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.391 [2024-11-20 11:35:36.733827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:51.391 [2024-11-20 11:35:36.738377] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:51.391 [2024-11-20 11:35:36.738601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.391 [2024-11-20 11:35:36.738648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:51.391 [2024-11-20 11:35:36.743115] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:51.391 [2024-11-20 11:35:36.743338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.391 [2024-11-20 11:35:36.743375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:51.391 [2024-11-20 11:35:36.747871] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:51.391 [2024-11-20 11:35:36.748083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.391 [2024-11-20 11:35:36.748117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:51.391 [2024-11-20 11:35:36.752649] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:51.391 [2024-11-20 11:35:36.752916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.391 [2024-11-20 11:35:36.752951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:51.391 [2024-11-20 11:35:36.757415] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:51.391 [2024-11-20 11:35:36.757662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.391 [2024-11-20 
11:35:36.757697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:51.391 [2024-11-20 11:35:36.762213] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:51.391 [2024-11-20 11:35:36.762451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.391 [2024-11-20 11:35:36.762489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:51.391 [2024-11-20 11:35:36.767631] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:51.391 [2024-11-20 11:35:36.767949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.391 [2024-11-20 11:35:36.767999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:51.391 [2024-11-20 11:35:36.772296] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:51.391 [2024-11-20 11:35:36.772575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.391 [2024-11-20 11:35:36.772639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:51.391 [2024-11-20 11:35:36.777058] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd3180) with pdu=0x2000166ff3c8 00:22:51.391 [2024-11-20 11:35:36.779465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.391 [2024-11-20 11:35:36.779544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:51.391 5533.00 IOPS, 691.62 MiB/s 00:22:51.391 Latency(us) 00:22:51.391 [2024-11-20T11:35:36.980Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:51.391 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:22:51.391 nvme0n1 : 2.01 5528.77 691.10 0.00 0.00 2887.53 1906.50 8638.84 00:22:51.391 [2024-11-20T11:35:36.980Z] =================================================================================================================== 00:22:51.392 [2024-11-20T11:35:36.981Z] Total : 5528.77 691.10 0.00 0.00 2887.53 1906.50 8638.84 00:22:51.392 { 00:22:51.392 "results": [ 00:22:51.392 { 00:22:51.392 "job": "nvme0n1", 00:22:51.392 "core_mask": "0x2", 00:22:51.392 "workload": "randwrite", 00:22:51.392 "status": "finished", 00:22:51.392 "queue_depth": 16, 00:22:51.392 "io_size": 131072, 00:22:51.392 "runtime": 2.005148, 00:22:51.392 "iops": 5528.7689487259795, 00:22:51.392 "mibps": 691.0961185907474, 00:22:51.392 "io_failed": 0, 00:22:51.392 "io_timeout": 0, 00:22:51.392 "avg_latency_us": 2887.5312023354604, 00:22:51.392 "min_latency_us": 1906.5018181818182, 00:22:51.392 "max_latency_us": 8638.836363636363 00:22:51.392 } 00:22:51.392 ], 00:22:51.392 "core_count": 1 00:22:51.392 } 00:22:51.392 11:35:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- 
# get_transient_errcount nvme0n1 00:22:51.392 11:35:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:22:51.392 11:35:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:22:51.392 | .driver_specific 00:22:51.392 | .nvme_error 00:22:51.392 | .status_code 00:22:51.392 | .command_transient_transport_error' 00:22:51.392 11:35:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:22:52.044 11:35:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 358 > 0 )) 00:22:52.044 11:35:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 94331 00:22:52.044 11:35:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 94331 ']' 00:22:52.045 11:35:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 94331 00:22:52.045 11:35:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:22:52.045 11:35:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:52.045 11:35:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 94331 00:22:52.045 11:35:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:52.045 11:35:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:52.045 killing process with pid 94331 00:22:52.045 11:35:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 94331' 00:22:52.045 Received shutdown signal, test time was about 2.000000 seconds 00:22:52.045 00:22:52.045 Latency(us) 00:22:52.045 [2024-11-20T11:35:37.634Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:52.045 [2024-11-20T11:35:37.634Z] =================================================================================================================== 00:22:52.045 [2024-11-20T11:35:37.634Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:52.045 11:35:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 94331 00:22:52.045 11:35:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 94331 00:22:52.045 11:35:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 94065 00:22:52.045 11:35:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 94065 ']' 00:22:52.045 11:35:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 94065 00:22:52.045 11:35:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:22:52.045 11:35:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:52.045 11:35:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 94065 00:22:52.045 11:35:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:52.045 11:35:37 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:52.045 killing process with pid 94065 00:22:52.045 11:35:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 94065' 00:22:52.045 11:35:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 94065 00:22:52.045 11:35:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 94065 00:22:52.304 00:22:52.304 real 0m16.570s 00:22:52.304 user 0m33.218s 00:22:52.304 sys 0m4.254s 00:22:52.304 11:35:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:52.304 11:35:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:22:52.304 ************************************ 00:22:52.304 END TEST nvmf_digest_error 00:22:52.304 ************************************ 00:22:52.304 11:35:37 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:22:52.304 11:35:37 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:22:52.304 11:35:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:52.304 11:35:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync 00:22:52.304 11:35:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:52.304 11:35:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e 00:22:52.304 11:35:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:52.304 11:35:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:52.304 rmmod nvme_tcp 00:22:52.304 rmmod nvme_fabrics 00:22:52.304 rmmod nvme_keyring 00:22:52.304 11:35:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:52.304 11:35:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e 00:22:52.304 11:35:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0 00:22:52.304 11:35:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@517 -- # '[' -n 94065 ']' 00:22:52.304 11:35:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # killprocess 94065 00:22:52.304 11:35:37 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # '[' -z 94065 ']' 00:22:52.304 11:35:37 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@958 -- # kill -0 94065 00:22:52.304 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (94065) - No such process 00:22:52.304 Process with pid 94065 is not found 00:22:52.304 11:35:37 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@981 -- # echo 'Process with pid 94065 is not found' 00:22:52.304 11:35:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:52.304 11:35:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:52.304 11:35:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:52.304 11:35:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr 00:22:52.304 11:35:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:52.304 11:35:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-save 00:22:52.304 11:35:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-restore 00:22:52.304 11:35:37 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:52.304 11:35:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:22:52.304 11:35:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:22:52.304 11:35:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:22:52.304 11:35:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:22:52.304 11:35:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:22:52.304 11:35:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:22:52.304 11:35:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:22:52.304 11:35:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:22:52.304 11:35:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:22:52.304 11:35:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:22:52.563 11:35:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:22:52.563 11:35:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:22:52.563 11:35:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:52.563 11:35:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:52.563 11:35:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@246 -- # remove_spdk_ns 00:22:52.563 11:35:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:52.563 11:35:37 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:52.563 11:35:37 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:52.563 11:35:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@300 -- # return 0 00:22:52.563 00:22:52.563 real 0m33.435s 00:22:52.563 user 1m4.892s 00:22:52.563 sys 0m8.849s 00:22:52.563 11:35:38 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:52.563 11:35:38 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:22:52.563 ************************************ 00:22:52.563 END TEST nvmf_digest 00:22:52.563 ************************************ 00:22:52.563 11:35:38 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 1 -eq 1 ]] 00:22:52.563 11:35:38 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ tcp == \t\c\p ]] 00:22:52.563 11:35:38 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@38 -- # run_test nvmf_mdns_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/mdns_discovery.sh --transport=tcp 00:22:52.563 11:35:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:52.563 11:35:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:52.563 11:35:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:52.563 ************************************ 00:22:52.563 START TEST nvmf_mdns_discovery 00:22:52.563 ************************************ 00:22:52.563 11:35:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@1129 -- 
# /home/vagrant/spdk_repo/spdk/test/nvmf/host/mdns_discovery.sh --transport=tcp 00:22:52.563 * Looking for test storage... 00:22:52.563 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:22:52.563 11:35:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:52.563 11:35:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@1693 -- # lcov --version 00:22:52.563 11:35:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:52.823 11:35:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:52.823 11:35:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:52.823 11:35:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:52.823 11:35:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:52.823 11:35:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:22:52.823 11:35:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:22:52.823 11:35:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:22:52.823 11:35:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:22:52.823 11:35:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:22:52.823 11:35:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:22:52.823 11:35:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:22:52.823 11:35:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:52.823 11:35:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@344 -- # case "$op" in 00:22:52.823 11:35:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@345 -- # : 1 00:22:52.823 11:35:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:52.823 11:35:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:52.823 11:35:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@365 -- # decimal 1 00:22:52.823 11:35:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@353 -- # local d=1 00:22:52.823 11:35:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:52.823 11:35:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@355 -- # echo 1 00:22:52.823 11:35:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:22:52.823 11:35:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@366 -- # decimal 2 00:22:52.823 11:35:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@353 -- # local d=2 00:22:52.823 11:35:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:52.823 11:35:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@355 -- # echo 2 00:22:52.823 11:35:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:22:52.823 11:35:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:52.823 11:35:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:52.823 11:35:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@368 -- # return 0 00:22:52.823 11:35:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:52.823 11:35:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:52.823 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:52.823 --rc genhtml_branch_coverage=1 00:22:52.823 --rc genhtml_function_coverage=1 00:22:52.823 --rc genhtml_legend=1 00:22:52.823 --rc geninfo_all_blocks=1 00:22:52.823 --rc geninfo_unexecuted_blocks=1 00:22:52.823 00:22:52.823 ' 00:22:52.823 11:35:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:52.823 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:52.824 --rc genhtml_branch_coverage=1 00:22:52.824 --rc genhtml_function_coverage=1 00:22:52.824 --rc genhtml_legend=1 00:22:52.824 --rc geninfo_all_blocks=1 00:22:52.824 --rc geninfo_unexecuted_blocks=1 00:22:52.824 00:22:52.824 ' 00:22:52.824 11:35:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:52.824 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:52.824 --rc genhtml_branch_coverage=1 00:22:52.824 --rc genhtml_function_coverage=1 00:22:52.824 --rc genhtml_legend=1 00:22:52.824 --rc geninfo_all_blocks=1 00:22:52.824 --rc geninfo_unexecuted_blocks=1 00:22:52.824 00:22:52.824 ' 00:22:52.824 11:35:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:52.824 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:52.824 --rc genhtml_branch_coverage=1 00:22:52.824 --rc genhtml_function_coverage=1 00:22:52.824 --rc genhtml_legend=1 00:22:52.824 --rc geninfo_all_blocks=1 00:22:52.824 --rc geninfo_unexecuted_blocks=1 00:22:52.824 00:22:52.824 ' 00:22:52.824 11:35:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:52.824 11:35:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@7 -- # uname -s 00:22:52.824 11:35:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:52.824 11:35:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:52.824 11:35:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:52.824 11:35:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:52.824 11:35:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:52.824 11:35:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:52.824 11:35:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:52.824 11:35:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:52.824 11:35:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:52.824 11:35:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:52.824 11:35:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a 00:22:52.824 11:35:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=806d442c-08ba-4719-b76d-33169283459a 00:22:52.824 11:35:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:52.824 11:35:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:52.824 11:35:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:52.824 11:35:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:52.824 11:35:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:52.824 11:35:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:22:52.824 11:35:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:52.824 11:35:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:52.824 11:35:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:52.824 11:35:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:52.824 11:35:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:52.824 11:35:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:52.824 11:35:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- paths/export.sh@5 -- # export PATH 00:22:52.824 11:35:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:52.824 11:35:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@51 -- # : 0 00:22:52.824 11:35:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:52.824 11:35:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:52.824 11:35:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:52.824 11:35:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:52.824 11:35:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:52.824 11:35:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:52.824 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:52.824 11:35:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:52.824 11:35:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:52.824 11:35:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:52.824 11:35:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@13 -- # DISCOVERY_FILTER=address 00:22:52.824 11:35:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@14 -- # DISCOVERY_PORT=8009 00:22:52.824 11:35:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- 
host/mdns_discovery.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:22:52.824 11:35:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@18 -- # NQN=nqn.2016-06.io.spdk:cnode 00:22:52.824 11:35:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@19 -- # NQN2=nqn.2016-06.io.spdk:cnode2 00:22:52.824 11:35:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@21 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:22:52.824 11:35:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@22 -- # HOST_SOCK=/tmp/host.sock 00:22:52.824 11:35:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@24 -- # nvmftestinit 00:22:52.824 11:35:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:52.824 11:35:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:52.824 11:35:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:52.824 11:35:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:52.824 11:35:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:52.824 11:35:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:52.824 11:35:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:52.824 11:35:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:52.824 11:35:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:22:52.824 11:35:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:22:52.824 11:35:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:22:52.825 11:35:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:22:52.825 11:35:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:22:52.825 11:35:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@460 -- # nvmf_veth_init 00:22:52.825 11:35:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:52.825 11:35:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:22:52.825 11:35:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:22:52.825 11:35:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:22:52.825 11:35:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:52.825 11:35:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:22:52.825 11:35:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:22:52.825 11:35:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:22:52.825 11:35:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:22:52.825 11:35:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:22:52.825 11:35:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@155 -- # 
NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:22:52.825 11:35:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:52.825 11:35:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:22:52.825 11:35:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:22:52.825 11:35:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:22:52.825 11:35:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:22:52.825 11:35:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:22:52.825 Cannot find device "nvmf_init_br" 00:22:52.825 11:35:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@162 -- # true 00:22:52.825 11:35:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:22:52.825 Cannot find device "nvmf_init_br2" 00:22:52.825 11:35:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@163 -- # true 00:22:52.825 11:35:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:22:52.825 Cannot find device "nvmf_tgt_br" 00:22:52.825 11:35:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@164 -- # true 00:22:52.825 11:35:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:22:52.825 Cannot find device "nvmf_tgt_br2" 00:22:52.825 11:35:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@165 -- # true 00:22:52.825 11:35:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:22:52.825 Cannot find device "nvmf_init_br" 00:22:52.825 11:35:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@166 -- # true 00:22:52.825 11:35:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:22:52.825 Cannot find device "nvmf_init_br2" 00:22:52.825 11:35:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@167 -- # true 00:22:52.825 11:35:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:22:52.825 Cannot find device "nvmf_tgt_br" 00:22:52.825 11:35:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@168 -- # true 00:22:52.825 11:35:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:22:52.825 Cannot find device "nvmf_tgt_br2" 00:22:52.825 11:35:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@169 -- # true 00:22:52.825 11:35:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:22:52.825 Cannot find device "nvmf_br" 00:22:52.825 11:35:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@170 -- # true 00:22:52.825 11:35:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:22:52.825 Cannot find device "nvmf_init_if" 00:22:52.825 11:35:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@171 -- # true 00:22:52.825 11:35:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:22:52.825 Cannot find device "nvmf_init_if2" 00:22:52.825 11:35:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@172 -- # true 00:22:52.825 11:35:38 
nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:52.825 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:52.825 11:35:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@173 -- # true 00:22:52.825 11:35:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:52.825 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:52.825 11:35:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@174 -- # true 00:22:52.825 11:35:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:22:52.825 11:35:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:22:52.825 11:35:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:22:52.825 11:35:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:22:52.825 11:35:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:22:53.085 11:35:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:22:53.085 11:35:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:22:53.085 11:35:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:22:53.085 11:35:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:22:53.085 11:35:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:22:53.085 11:35:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:22:53.085 11:35:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:22:53.085 11:35:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:22:53.085 11:35:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:22:53.085 11:35:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:22:53.085 11:35:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:22:53.086 11:35:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:22:53.086 11:35:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:22:53.086 11:35:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:22:53.086 11:35:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:22:53.086 11:35:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:22:53.086 11:35:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@208 -- # ip link set nvmf_br up 
00:22:53.086 11:35:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:22:53.086 11:35:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:22:53.086 11:35:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:22:53.086 11:35:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:22:53.086 11:35:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:22:53.086 11:35:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:22:53.086 11:35:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:22:53.086 11:35:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:22:53.086 11:35:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:22:53.086 11:35:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:22:53.086 11:35:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:22:53.086 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:22:53.086 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.104 ms 00:22:53.086 00:22:53.086 --- 10.0.0.3 ping statistics --- 00:22:53.086 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:53.086 rtt min/avg/max/mdev = 0.104/0.104/0.104/0.000 ms 00:22:53.086 11:35:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:22:53.086 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:22:53.086 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.055 ms 00:22:53.086 00:22:53.086 --- 10.0.0.4 ping statistics --- 00:22:53.086 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:53.086 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:22:53.086 11:35:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:22:53.086 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:53.086 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:22:53.086 00:22:53.086 --- 10.0.0.1 ping statistics --- 00:22:53.086 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:53.086 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:22:53.086 11:35:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:22:53.086 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:22:53.086 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.077 ms 00:22:53.086 00:22:53.086 --- 10.0.0.2 ping statistics --- 00:22:53.086 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:53.086 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:22:53.086 11:35:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:53.086 11:35:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@461 -- # return 0 00:22:53.086 11:35:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:53.086 11:35:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:53.086 11:35:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:53.086 11:35:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:53.086 11:35:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:53.086 11:35:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:53.086 11:35:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:53.086 11:35:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@29 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:22:53.086 11:35:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:53.086 11:35:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:53.086 11:35:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:53.086 11:35:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@509 -- # nvmfpid=94666 00:22:53.086 11:35:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:22:53.086 11:35:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@510 -- # waitforlisten 94666 00:22:53.086 11:35:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@835 -- # '[' -z 94666 ']' 00:22:53.086 11:35:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:53.086 11:35:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:53.086 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:53.086 11:35:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:53.086 11:35:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:53.086 11:35:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:53.345 [2024-11-20 11:35:38.728561] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 
00:22:53.345 [2024-11-20 11:35:38.728710] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:53.345 [2024-11-20 11:35:38.879135] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:53.345 [2024-11-20 11:35:38.924869] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:53.345 [2024-11-20 11:35:38.924945] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:53.345 [2024-11-20 11:35:38.924964] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:53.345 [2024-11-20 11:35:38.924979] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:53.345 [2024-11-20 11:35:38.924991] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:53.345 [2024-11-20 11:35:38.925388] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:54.281 11:35:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:54.281 11:35:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@868 -- # return 0 00:22:54.281 11:35:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:54.281 11:35:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:54.281 11:35:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:54.281 11:35:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:54.281 11:35:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@31 -- # rpc_cmd nvmf_set_config --discovery-filter=address 00:22:54.281 11:35:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:54.281 11:35:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:54.281 11:35:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:54.281 11:35:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@32 -- # rpc_cmd framework_start_init 00:22:54.281 11:35:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:54.281 11:35:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:54.539 11:35:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:54.539 11:35:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@33 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:54.539 11:35:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:54.539 11:35:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:54.539 [2024-11-20 11:35:39.919006] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:54.539 11:35:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:54.539 11:35:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.3 -s 8009 00:22:54.539 11:35:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:54.539 11:35:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:54.539 [2024-11-20 11:35:39.927178] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 00:22:54.539 11:35:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:54.539 11:35:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@36 -- # rpc_cmd bdev_null_create null0 1000 512 00:22:54.539 11:35:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:54.539 11:35:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:54.539 null0 00:22:54.539 11:35:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:54.539 11:35:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@37 -- # rpc_cmd bdev_null_create null1 1000 512 00:22:54.539 11:35:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:54.539 11:35:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:54.539 null1 00:22:54.539 11:35:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:54.539 11:35:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@38 -- # rpc_cmd bdev_null_create null2 1000 512 00:22:54.539 11:35:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:54.539 11:35:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:54.539 null2 00:22:54.539 11:35:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:54.539 11:35:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@39 -- # rpc_cmd bdev_null_create null3 1000 512 00:22:54.539 11:35:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:54.539 11:35:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:54.539 null3 00:22:54.539 11:35:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:54.539 11:35:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@40 -- # rpc_cmd bdev_wait_for_examine 00:22:54.539 11:35:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:54.539 11:35:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:54.539 11:35:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:54.539 11:35:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@48 -- # hostpid=94722 00:22:54.539 11:35:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@49 -- # waitforlisten 94722 /tmp/host.sock 00:22:54.539 11:35:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:22:54.539 11:35:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@835 -- # '[' -z 94722 ']' 00:22:54.539 11:35:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@839 -- # local 
rpc_addr=/tmp/host.sock 00:22:54.539 11:35:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:54.539 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:22:54.539 11:35:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:22:54.539 11:35:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:54.539 11:35:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:54.539 [2024-11-20 11:35:40.029602] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 00:22:54.539 [2024-11-20 11:35:40.029708] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94722 ] 00:22:54.798 [2024-11-20 11:35:40.176862] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:54.798 [2024-11-20 11:35:40.227459] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:55.056 11:35:40 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:55.056 11:35:40 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@868 -- # return 0 00:22:55.056 11:35:40 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@51 -- # trap 'process_shm --id $NVMF_APP_SHM_ID;exit 1' SIGINT SIGTERM 00:22:55.056 11:35:40 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@52 -- # trap 'process_shm --id $NVMF_APP_SHM_ID;nvmftestfini;kill $hostpid;kill $avahipid;' EXIT 00:22:55.056 11:35:40 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@56 -- # avahi-daemon --kill 00:22:55.056 11:35:40 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@58 -- # avahipid=94738 00:22:55.056 11:35:40 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@59 -- # sleep 1 00:22:55.056 11:35:40 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@57 -- # ip netns exec nvmf_tgt_ns_spdk avahi-daemon -f /dev/fd/63 00:22:55.056 11:35:40 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@57 -- # echo -e '[server]\nallow-interfaces=nvmf_tgt_if,nvmf_tgt_if2\nuse-ipv4=yes\nuse-ipv6=no' 00:22:55.056 Process 1059 died: No such process; trying to remove PID file. (/run/avahi-daemon//pid) 00:22:55.056 Found user 'avahi' (UID 70) and group 'avahi' (GID 70). 00:22:55.056 Successfully dropped root privileges. 00:22:55.056 avahi-daemon 0.8 starting up. 00:22:55.056 WARNING: No NSS support for mDNS detected, consider installing nss-mdns! 00:22:55.056 Successfully called chroot(). 00:22:55.056 Successfully dropped remaining capabilities. 00:22:55.056 No service file found in /etc/avahi/services. 00:22:55.056 Joining mDNS multicast group on interface nvmf_tgt_if2.IPv4 with address 10.0.0.4. 00:22:55.056 New relevant interface nvmf_tgt_if2.IPv4 for mDNS. 00:22:55.056 Joining mDNS multicast group on interface nvmf_tgt_if.IPv4 with address 10.0.0.3. 00:22:55.056 New relevant interface nvmf_tgt_if.IPv4 for mDNS. 00:22:55.056 Network interface enumeration completed. 00:22:55.056 Registering new address record for fe80::6084:d4ff:fe9b:2260 on nvmf_tgt_if2.*. 
00:22:55.056 Registering new address record for 10.0.0.4 on nvmf_tgt_if2.IPv4. 00:22:55.056 Registering new address record for fe80::5424:7dff:feb6:5806 on nvmf_tgt_if.*. 00:22:55.056 Registering new address record for 10.0.0.3 on nvmf_tgt_if.IPv4. 00:22:55.990 Server startup complete. Host name is fedora39-cloud-1721788873-2326.local. Local service cookie is 2781067550. 00:22:55.990 11:35:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@61 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:22:55.990 11:35:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:55.990 11:35:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:55.990 11:35:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:55.990 11:35:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@62 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:22:55.990 11:35:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:55.990 11:35:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:55.990 11:35:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:55.990 11:35:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@114 -- # notify_id=0 00:22:55.990 11:35:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@120 -- # get_subsystem_names 00:22:55.990 11:35:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:55.990 11:35:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:22:55.990 11:35:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:55.990 11:35:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:22:55.990 11:35:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:55.990 11:35:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:22:55.990 11:35:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:55.990 11:35:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@120 -- # [[ '' == '' ]] 00:22:55.990 11:35:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@121 -- # get_bdev_list 00:22:56.247 11:35:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:56.247 11:35:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:22:56.247 11:35:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:22:56.247 11:35:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:22:56.247 11:35:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:56.247 11:35:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:56.247 11:35:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:56.247 11:35:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@121 -- # [[ '' == '' ]] 00:22:56.247 11:35:41 
nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@123 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:22:56.247 11:35:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:56.247 11:35:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:56.247 11:35:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:56.247 11:35:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@124 -- # get_subsystem_names 00:22:56.247 11:35:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:56.247 11:35:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:22:56.247 11:35:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:56.247 11:35:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:22:56.247 11:35:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:56.247 11:35:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:22:56.247 11:35:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:56.247 11:35:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@124 -- # [[ '' == '' ]] 00:22:56.247 11:35:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@125 -- # get_bdev_list 00:22:56.247 11:35:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:56.247 11:35:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:56.247 11:35:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:56.247 11:35:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:22:56.247 11:35:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:22:56.247 11:35:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:22:56.247 11:35:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:56.247 11:35:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@125 -- # [[ '' == '' ]] 00:22:56.247 11:35:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@127 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:22:56.247 11:35:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:56.247 11:35:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:56.247 11:35:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:56.247 11:35:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@128 -- # get_subsystem_names 00:22:56.247 11:35:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:56.247 11:35:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:22:56.247 11:35:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:56.247 11:35:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 
-- # set +x 00:22:56.247 11:35:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:22:56.247 11:35:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:22:56.247 11:35:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:56.247 11:35:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@128 -- # [[ '' == '' ]] 00:22:56.247 11:35:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@129 -- # get_bdev_list 00:22:56.248 11:35:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:56.248 11:35:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:56.248 11:35:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:22:56.248 11:35:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:56.248 11:35:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:22:56.248 11:35:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:22:56.248 11:35:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:56.248 [2024-11-20 11:35:41.828342] bdev_mdns_client.c: 396:mdns_browse_handler: *INFO*: (Browser) CACHE_EXHAUSTED 00:22:56.506 11:35:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@129 -- # [[ '' == '' ]] 00:22:56.506 11:35:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@133 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:22:56.506 11:35:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:56.506 11:35:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:56.506 [2024-11-20 11:35:41.867665] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:22:56.506 11:35:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:56.506 11:35:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@137 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:22:56.506 11:35:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:56.506 11:35:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:56.506 11:35:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:56.506 11:35:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@140 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode20 00:22:56.506 11:35:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:56.506 11:35:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:56.506 11:35:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:56.506 11:35:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@141 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode20 null2 00:22:56.506 11:35:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:56.506 11:35:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:22:56.506 11:35:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:56.506 11:35:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@145 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode20 nqn.2021-12.io.spdk:test 00:22:56.506 11:35:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:56.506 11:35:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:56.506 11:35:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:56.506 11:35:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@148 -- # rpc_cmd nvmf_publish_mdns_prr 00:22:56.506 11:35:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:56.506 11:35:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:56.506 11:35:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:56.506 11:35:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@149 -- # sleep 5 00:22:57.440 [2024-11-20 11:35:42.728329] bdev_mdns_client.c: 396:mdns_browse_handler: *INFO*: (Browser) ALL_FOR_NOW 00:22:57.699 [2024-11-20 11:35:43.128375] bdev_mdns_client.c: 255:mdns_resolve_handler: *INFO*: Service 'spdk0' of type '_nvme-disc._tcp' in domain 'local' 00:22:57.699 [2024-11-20 11:35:43.128425] bdev_mdns_client.c: 260:mdns_resolve_handler: *INFO*: fedora39-cloud-1721788873-2326.local:8009 (10.0.0.4) 00:22:57.699 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:22:57.699 cookie is 0 00:22:57.699 is_local: 1 00:22:57.699 our_own: 0 00:22:57.699 wide_area: 0 00:22:57.699 multicast: 1 00:22:57.699 cached: 1 00:22:57.699 [2024-11-20 11:35:43.228341] bdev_mdns_client.c: 255:mdns_resolve_handler: *INFO*: Service 'spdk0' of type '_nvme-disc._tcp' in domain 'local' 00:22:57.699 [2024-11-20 11:35:43.228384] bdev_mdns_client.c: 260:mdns_resolve_handler: *INFO*: fedora39-cloud-1721788873-2326.local:8009 (10.0.0.3) 00:22:57.699 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:22:57.699 cookie is 0 00:22:57.699 is_local: 1 00:22:57.699 our_own: 0 00:22:57.699 wide_area: 0 00:22:57.699 multicast: 1 00:22:57.699 cached: 1 00:22:58.635 [2024-11-20 11:35:44.129175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:58.635 [2024-11-20 11:35:44.129243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc2710 with addr=10.0.0.4, port=8009 00:22:58.635 [2024-11-20 11:35:44.129280] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:22:58.635 [2024-11-20 11:35:44.129294] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:22:58.635 [2024-11-20 11:35:44.129304] bdev_nvme.c:7547:discovery_poller: *ERROR*: Discovery[10.0.0.4:8009] could not start discovery connect 00:22:58.894 [2024-11-20 11:35:44.233784] bdev_nvme.c:7479:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:22:58.894 [2024-11-20 11:35:44.233828] bdev_nvme.c:7565:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:22:58.894 [2024-11-20 11:35:44.233848] bdev_nvme.c:7442:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:22:58.894 [2024-11-20 11:35:44.319940] bdev_nvme.c:7408:discovery_log_page_cb: *INFO*: 
Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem mdns1_nvme0 00:22:58.894 [2024-11-20 11:35:44.374448] bdev_nvme.c:5635:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.3:4420 00:22:58.894 [2024-11-20 11:35:44.375303] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0xff7e70:1 started. 00:22:58.894 [2024-11-20 11:35:44.377027] bdev_nvme.c:7298:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach mdns1_nvme0 done 00:22:58.894 [2024-11-20 11:35:44.377056] bdev_nvme.c:7257:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:22:58.894 [2024-11-20 11:35:44.382229] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0xff7e70 was disconnected and freed. delete nvme_qpair. 00:22:59.830 [2024-11-20 11:35:45.129126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:59.830 [2024-11-20 11:35:45.129196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1130b30 with addr=10.0.0.4, port=8009 00:22:59.830 [2024-11-20 11:35:45.129218] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:22:59.830 [2024-11-20 11:35:45.129229] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:22:59.830 [2024-11-20 11:35:45.129238] bdev_nvme.c:7547:discovery_poller: *ERROR*: Discovery[10.0.0.4:8009] could not start discovery connect 00:23:00.765 [2024-11-20 11:35:46.129114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:00.765 [2024-11-20 11:35:46.129186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc20d0 with addr=10.0.0.4, port=8009 00:23:00.765 [2024-11-20 11:35:46.129209] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:23:00.765 [2024-11-20 11:35:46.129219] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:23:00.765 [2024-11-20 11:35:46.129229] bdev_nvme.c:7547:discovery_poller: *ERROR*: Discovery[10.0.0.4:8009] could not start discovery connect 00:23:01.332 11:35:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@152 -- # check_mdns_request_exists spdk1 10.0.0.4 8009 'not found' 00:23:01.332 11:35:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@85 -- # local process=spdk1 00:23:01.332 11:35:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@86 -- # local ip=10.0.0.4 00:23:01.332 11:35:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@87 -- # local port=8009 00:23:01.332 11:35:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # local 'check_type=not found' 00:23:01.332 11:35:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@89 -- # local output 00:23:01.332 11:35:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@92 -- # avahi-browse -t -r _nvme-disc._tcp -p 00:23:01.591 11:35:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@92 -- # output='+;(null);IPv4;spdk0;_nvme-disc._tcp;local 00:23:01.591 +;(null);IPv4;spdk0;_nvme-disc._tcp;local 00:23:01.591 =;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:23:01.591 
=;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.3;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp"' 00:23:01.591 11:35:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@93 -- # readarray -t lines 00:23:01.591 11:35:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:23:01.591 11:35:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk0;_nvme-disc._tcp;local == *\s\p\d\k\1* ]] 00:23:01.591 11:35:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:23:01.591 11:35:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk0;_nvme-disc._tcp;local == *\s\p\d\k\1* ]] 00:23:01.591 11:35:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:23:01.591 11:35:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\s\p\d\k\1* ]] 00:23:01.591 11:35:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:23:01.591 11:35:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.3;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\s\p\d\k\1* ]] 00:23:01.592 11:35:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@105 -- # [[ not found == \f\o\u\n\d ]] 00:23:01.592 11:35:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@108 -- # return 0 00:23:01.592 11:35:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@154 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.4 -s 8009 00:23:01.592 11:35:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:01.592 11:35:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:01.592 [2024-11-20 11:35:46.945642] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.4 port 8009 *** 00:23:01.592 [2024-11-20 11:35:46.948134] bdev_nvme.c:7461:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:23:01.592 [2024-11-20 11:35:46.948174] bdev_nvme.c:7442:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:23:01.592 11:35:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:01.592 11:35:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@156 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.4 -s 4420 00:23:01.592 11:35:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:01.592 11:35:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:01.592 [2024-11-20 11:35:46.953572] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.4 port 4420 *** 00:23:01.592 [2024-11-20 11:35:46.954122] bdev_nvme.c:7461:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:23:01.592 11:35:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:01.592 11:35:46 
nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@157 -- # sleep 1 00:23:01.592 [2024-11-20 11:35:47.085259] bdev_nvme.c:7257:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:23:01.592 [2024-11-20 11:35:47.085326] bdev_nvme.c:7442:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:23:01.592 [2024-11-20 11:35:47.134057] bdev_nvme.c:7479:discovery_attach_cb: *INFO*: Discovery[10.0.0.4:8009] discovery ctrlr attached 00:23:01.592 [2024-11-20 11:35:47.134094] bdev_nvme.c:7565:discovery_poller: *INFO*: Discovery[10.0.0.4:8009] discovery ctrlr connected 00:23:01.592 [2024-11-20 11:35:47.134112] bdev_nvme.c:7442:get_discovery_log_page: *INFO*: Discovery[10.0.0.4:8009] sent discovery log page command 00:23:01.592 [2024-11-20 11:35:47.175209] bdev_nvme.c:7257:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:23:01.851 [2024-11-20 11:35:47.220184] bdev_nvme.c:7408:discovery_log_page_cb: *INFO*: Discovery[10.0.0.4:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.4:4420 new subsystem mdns0_nvme0 00:23:01.851 [2024-11-20 11:35:47.274657] bdev_nvme.c:5635:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] ctrlr was created to 10.0.0.4:4420 00:23:01.851 [2024-11-20 11:35:47.275335] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Connecting qpair 0xff4d90:1 started. 00:23:01.851 [2024-11-20 11:35:47.276756] bdev_nvme.c:7298:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.4:8009] attach mdns0_nvme0 done 00:23:01.851 [2024-11-20 11:35:47.276784] bdev_nvme.c:7257:discovery_remove_controllers: *INFO*: Discovery[10.0.0.4:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.4:4420 found again 00:23:01.851 [2024-11-20 11:35:47.282802] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] qpair 0xff4d90 was disconnected and freed. delete nvme_qpair. 
00:23:02.419 11:35:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@160 -- # check_mdns_request_exists spdk1 10.0.0.4 8009 found 00:23:02.419 11:35:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@85 -- # local process=spdk1 00:23:02.419 11:35:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@86 -- # local ip=10.0.0.4 00:23:02.419 11:35:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@87 -- # local port=8009 00:23:02.419 11:35:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # local check_type=found 00:23:02.419 11:35:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@89 -- # local output 00:23:02.419 11:35:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@92 -- # avahi-browse -t -r _nvme-disc._tcp -p 00:23:02.419 11:35:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@92 -- # output='+;(null);IPv4;spdk1;_nvme-disc._tcp;local 00:23:02.419 +;(null);IPv4;spdk0;_nvme-disc._tcp;local 00:23:02.419 +;(null);IPv4;spdk1;_nvme-disc._tcp;local 00:23:02.419 +;(null);IPv4;spdk0;_nvme-disc._tcp;local 00:23:02.419 =;(null);IPv4;spdk1;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:23:02.419 =;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:23:02.419 =;(null);IPv4;spdk1;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.3;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:23:02.419 =;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.3;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp"' 00:23:02.419 11:35:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@93 -- # readarray -t lines 00:23:02.419 11:35:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:23:02.419 11:35:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk1;_nvme-disc._tcp;local == *\s\p\d\k\1* ]] 00:23:02.419 11:35:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk1;_nvme-disc._tcp;local == *\1\0\.\0\.\0\.\4* ]] 00:23:02.419 11:35:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:23:02.419 11:35:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk0;_nvme-disc._tcp;local == *\s\p\d\k\1* ]] 00:23:02.419 11:35:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:23:02.419 11:35:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk1;_nvme-disc._tcp;local == *\s\p\d\k\1* ]] 00:23:02.419 11:35:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk1;_nvme-disc._tcp;local == *\1\0\.\0\.\0\.\4* ]] 00:23:02.419 11:35:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:23:02.419 11:35:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk0;_nvme-disc._tcp;local == *\s\p\d\k\1* ]] 00:23:02.419 11:35:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:23:02.419 11:35:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- 
host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk1;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\s\p\d\k\1* ]] 00:23:02.419 11:35:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk1;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\1\0\.\0\.\0\.\4* ]] 00:23:02.419 11:35:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk1;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\8\0\0\9* ]] 00:23:02.419 11:35:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@97 -- # [[ found == \f\o\u\n\d ]] 00:23:02.419 11:35:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@98 -- # return 0 00:23:02.419 11:35:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@162 -- # get_mdns_discovery_svcs 00:23:02.419 11:35:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info 00:23:02.419 11:35:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # sort 00:23:02.419 11:35:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # jq -r '.[].name' 00:23:02.419 11:35:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:02.419 11:35:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:02.419 11:35:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # xargs 00:23:02.678 11:35:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:02.678 11:35:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@162 -- # [[ mdns == \m\d\n\s ]] 00:23:02.678 11:35:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@163 -- # get_discovery_ctrlrs 00:23:02.678 11:35:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # jq -r '.[].name' 00:23:02.678 11:35:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:23:02.678 11:35:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # xargs 00:23:02.678 11:35:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # sort 00:23:02.678 11:35:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:02.678 11:35:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:02.678 11:35:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:02.678 11:35:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@163 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ \m\d\n\s\1\_\n\v\m\e ]] 00:23:02.678 11:35:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@164 -- # get_subsystem_names 00:23:02.678 11:35:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:02.678 11:35:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:23:02.678 11:35:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:23:02.678 11:35:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:23:02.678 11:35:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:23:02.678 11:35:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:02.678 11:35:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:02.678 [2024-11-20 11:35:48.128357] bdev_mdns_client.c: 255:mdns_resolve_handler: *INFO*: Service 'spdk1' of type '_nvme-disc._tcp' in domain 'local' 00:23:02.678 [2024-11-20 11:35:48.128387] bdev_mdns_client.c: 260:mdns_resolve_handler: *INFO*: fedora39-cloud-1721788873-2326.local:8009 (10.0.0.3) 00:23:02.678 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:23:02.678 cookie is 0 00:23:02.678 is_local: 1 00:23:02.678 our_own: 0 00:23:02.678 wide_area: 0 00:23:02.679 multicast: 1 00:23:02.679 cached: 1 00:23:02.679 [2024-11-20 11:35:48.128402] bdev_mdns_client.c: 323:mdns_resolve_handler: *ERROR*: mDNS discovery entry exists already. trid->traddr: 10.0.0.3 trid->trsvcid: 8009 00:23:02.679 11:35:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@164 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]] 00:23:02.679 11:35:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@165 -- # get_bdev_list 00:23:02.679 11:35:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:02.679 11:35:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:02.679 11:35:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:02.679 11:35:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:23:02.679 11:35:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:23:02.679 11:35:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:23:02.679 11:35:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:02.679 11:35:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@165 -- # [[ mdns0_nvme0n1 mdns1_nvme0n1 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\1 ]] 00:23:02.679 11:35:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@166 -- # get_subsystem_paths mdns0_nvme0 00:23:02.679 11:35:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:02.679 11:35:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 00:23:02.679 11:35:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 00:23:02.679 11:35:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 00:23:02.679 11:35:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:02.679 11:35:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:02.679 11:35:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:02.937 11:35:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@166 -- # [[ 4420 == \4\4\2\0 ]] 00:23:02.937 11:35:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@167 -- # 
get_subsystem_paths mdns1_nvme0 00:23:02.937 11:35:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 00:23:02.937 11:35:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:02.937 11:35:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:02.937 11:35:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:02.937 11:35:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 00:23:02.937 11:35:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 00:23:02.937 11:35:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:02.937 11:35:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@167 -- # [[ 4420 == \4\4\2\0 ]] 00:23:02.937 11:35:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@168 -- # get_notification_count 00:23:02.937 11:35:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:23:02.937 11:35:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # jq '. | length' 00:23:02.937 11:35:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:02.937 11:35:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:02.937 11:35:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:02.937 11:35:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # notification_count=2 00:23:02.937 11:35:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@117 -- # notify_id=2 00:23:02.937 11:35:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@169 -- # [[ 2 == 2 ]] 00:23:02.937 11:35:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@172 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:23:02.937 11:35:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:02.937 11:35:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:02.937 11:35:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:02.937 11:35:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@173 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode20 null3 00:23:02.937 11:35:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:02.937 11:35:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:02.937 [2024-11-20 11:35:48.408681] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0xfe1ce0:1 started. 00:23:02.937 11:35:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:02.937 11:35:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@174 -- # sleep 1 00:23:02.937 [2024-11-20 11:35:48.412996] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0xfe1ce0 was disconnected and freed. delete nvme_qpair. 
00:23:02.937 [2024-11-20 11:35:48.416040] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Connecting qpair 0xff3810:1 started. 00:23:02.937 [2024-11-20 11:35:48.422904] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] qpair 0xff3810 was disconnected and freed. delete nvme_qpair. 00:23:02.937 [2024-11-20 11:35:48.428362] bdev_mdns_client.c: 255:mdns_resolve_handler: *INFO*: Service 'spdk1' of type '_nvme-disc._tcp' in domain 'local' 00:23:02.937 [2024-11-20 11:35:48.428386] bdev_mdns_client.c: 260:mdns_resolve_handler: *INFO*: fedora39-cloud-1721788873-2326.local:8009 (10.0.0.4) 00:23:02.937 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:23:02.937 cookie is 0 00:23:02.937 is_local: 1 00:23:02.937 our_own: 0 00:23:02.937 wide_area: 0 00:23:02.937 multicast: 1 00:23:02.937 cached: 1 00:23:02.937 [2024-11-20 11:35:48.428399] bdev_mdns_client.c: 323:mdns_resolve_handler: *ERROR*: mDNS discovery entry exists already. trid->traddr: 10.0.0.4 trid->trsvcid: 8009 00:23:03.912 11:35:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@176 -- # get_bdev_list 00:23:03.913 11:35:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:03.913 11:35:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:23:03.913 11:35:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:23:03.913 11:35:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:23:03.913 11:35:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:03.913 11:35:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:03.913 11:35:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:03.913 11:35:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@176 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:23:03.913 11:35:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@177 -- # get_notification_count 00:23:03.913 11:35:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # jq '. 
| length' 00:23:03.913 11:35:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:23:03.913 11:35:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:03.913 11:35:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:03.913 11:35:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:04.171 11:35:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # notification_count=2 00:23:04.171 11:35:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@117 -- # notify_id=4 00:23:04.171 11:35:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@178 -- # [[ 2 == 2 ]] 00:23:04.171 11:35:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@182 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4421 00:23:04.171 11:35:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:04.171 11:35:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:04.171 [2024-11-20 11:35:49.546999] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:23:04.171 [2024-11-20 11:35:49.547286] bdev_nvme.c:7461:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:23:04.171 [2024-11-20 11:35:49.547324] bdev_nvme.c:7442:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:23:04.171 [2024-11-20 11:35:49.547363] bdev_nvme.c:7461:discovery_aer_cb: *INFO*: Discovery[10.0.0.4:8009] got aer 00:23:04.171 [2024-11-20 11:35:49.547379] bdev_nvme.c:7442:get_discovery_log_page: *INFO*: Discovery[10.0.0.4:8009] sent discovery log page command 00:23:04.171 11:35:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:04.171 11:35:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@183 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.4 -s 4421 00:23:04.171 11:35:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:04.171 11:35:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:04.171 [2024-11-20 11:35:49.554930] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.4 port 4421 *** 00:23:04.171 [2024-11-20 11:35:49.555278] bdev_nvme.c:7461:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:23:04.171 [2024-11-20 11:35:49.555335] bdev_nvme.c:7461:discovery_aer_cb: *INFO*: Discovery[10.0.0.4:8009] got aer 00:23:04.171 11:35:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:04.171 11:35:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@184 -- # sleep 1 00:23:04.171 [2024-11-20 11:35:49.686410] bdev_nvme.c:7403:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 new path for mdns1_nvme0 00:23:04.171 [2024-11-20 11:35:49.686858] bdev_nvme.c:7403:discovery_log_page_cb: *INFO*: Discovery[10.0.0.4:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.4:4421 new path for mdns0_nvme0 00:23:04.171 [2024-11-20 11:35:49.747899] bdev_nvme.c:5635:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.3:4421 00:23:04.171 
[2024-11-20 11:35:49.747985] bdev_nvme.c:7298:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach mdns1_nvme0 done 00:23:04.171 [2024-11-20 11:35:49.747999] bdev_nvme.c:7257:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:23:04.171 [2024-11-20 11:35:49.748005] bdev_nvme.c:7257:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:23:04.171 [2024-11-20 11:35:49.748027] bdev_nvme.c:7442:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:23:04.171 [2024-11-20 11:35:49.748176] bdev_nvme.c:5635:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode20, 2] ctrlr was created to 10.0.0.4:4421 00:23:04.171 [2024-11-20 11:35:49.748206] bdev_nvme.c:7298:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.4:8009] attach mdns0_nvme0 done 00:23:04.171 [2024-11-20 11:35:49.748215] bdev_nvme.c:7257:discovery_remove_controllers: *INFO*: Discovery[10.0.0.4:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.4:4420 found again 00:23:04.171 [2024-11-20 11:35:49.748222] bdev_nvme.c:7257:discovery_remove_controllers: *INFO*: Discovery[10.0.0.4:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.4:4421 found again 00:23:04.171 [2024-11-20 11:35:49.748237] bdev_nvme.c:7442:get_discovery_log_page: *INFO*: Discovery[10.0.0.4:8009] sent discovery log page command 00:23:04.431 [2024-11-20 11:35:49.793517] bdev_nvme.c:7257:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:23:04.431 [2024-11-20 11:35:49.793553] bdev_nvme.c:7257:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:23:04.431 [2024-11-20 11:35:49.793598] bdev_nvme.c:7257:discovery_remove_controllers: *INFO*: Discovery[10.0.0.4:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.4:4420 found again 00:23:04.431 [2024-11-20 11:35:49.793608] bdev_nvme.c:7257:discovery_remove_controllers: *INFO*: Discovery[10.0.0.4:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.4:4421 found again 00:23:04.998 11:35:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@186 -- # get_subsystem_names 00:23:04.998 11:35:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:04.998 11:35:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:23:04.998 11:35:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:23:04.998 11:35:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:04.998 11:35:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:04.998 11:35:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:23:05.256 11:35:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:05.256 11:35:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@186 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]] 00:23:05.256 11:35:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@187 -- # get_bdev_list 00:23:05.256 11:35:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:23:05.256 11:35:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- 
host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:05.256 11:35:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:23:05.256 11:35:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:05.256 11:35:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:05.256 11:35:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:23:05.257 11:35:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:05.257 11:35:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@187 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:23:05.257 11:35:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@188 -- # get_subsystem_paths mdns0_nvme0 00:23:05.257 11:35:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 00:23:05.257 11:35:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:05.257 11:35:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:05.257 11:35:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:05.257 11:35:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 00:23:05.257 11:35:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 00:23:05.257 11:35:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:05.257 11:35:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@188 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:23:05.257 11:35:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@189 -- # get_subsystem_paths mdns1_nvme0 00:23:05.257 11:35:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 00:23:05.257 11:35:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:05.257 11:35:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:05.257 11:35:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 00:23:05.257 11:35:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:05.257 11:35:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 00:23:05.257 11:35:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:05.257 11:35:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@189 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:23:05.257 11:35:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@190 -- # get_notification_count 00:23:05.257 11:35:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4 00:23:05.257 11:35:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:05.257 11:35:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # jq '. 
| length' 00:23:05.257 11:35:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:05.257 11:35:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:05.518 11:35:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # notification_count=0 00:23:05.518 11:35:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@117 -- # notify_id=4 00:23:05.518 11:35:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@191 -- # [[ 0 == 0 ]] 00:23:05.518 11:35:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@195 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:23:05.518 11:35:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:05.518 11:35:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:05.518 [2024-11-20 11:35:50.860296] bdev_nvme.c:7461:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:23:05.518 [2024-11-20 11:35:50.860339] bdev_nvme.c:7442:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:23:05.518 [2024-11-20 11:35:50.860379] bdev_nvme.c:7461:discovery_aer_cb: *INFO*: Discovery[10.0.0.4:8009] got aer 00:23:05.518 [2024-11-20 11:35:50.860395] bdev_nvme.c:7442:get_discovery_log_page: *INFO*: Discovery[10.0.0.4:8009] sent discovery log page command 00:23:05.518 11:35:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:05.518 11:35:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@196 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.4 -s 4420 00:23:05.518 11:35:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:05.518 11:35:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:05.518 [2024-11-20 11:35:50.868396] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:05.518 [2024-11-20 11:35:50.868434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.518 [2024-11-20 11:35:50.868449] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:05.518 [2024-11-20 11:35:50.868458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.518 [2024-11-20 11:35:50.868469] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:05.518 [2024-11-20 11:35:50.868478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.518 [2024-11-20 11:35:50.868488] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:05.518 [2024-11-20 11:35:50.868497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.518 [2024-11-20 11:35:50.868507] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd47f0 is same with 
the state(6) to be set 00:23:05.519 [2024-11-20 11:35:50.872289] bdev_nvme.c:7461:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:23:05.519 [2024-11-20 11:35:50.872349] bdev_nvme.c:7461:discovery_aer_cb: *INFO*: Discovery[10.0.0.4:8009] got aer 00:23:05.519 [2024-11-20 11:35:50.875394] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:05.519 [2024-11-20 11:35:50.875427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.519 [2024-11-20 11:35:50.875450] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:05.519 [2024-11-20 11:35:50.875459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.519 [2024-11-20 11:35:50.875469] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:05.519 [2024-11-20 11:35:50.875478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.519 [2024-11-20 11:35:50.875488] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:05.519 [2024-11-20 11:35:50.875497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.519 [2024-11-20 11:35:50.875506] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfffdd0 is same with the state(6) to be set 00:23:05.519 11:35:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:05.519 11:35:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@197 -- # sleep 1 00:23:05.519 [2024-11-20 11:35:50.878353] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfd47f0 (9): Bad file descriptor 00:23:05.519 [2024-11-20 11:35:50.885359] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfffdd0 (9): Bad file descriptor 00:23:05.519 [2024-11-20 11:35:50.888379] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:23:05.519 [2024-11-20 11:35:50.888405] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:23:05.519 [2024-11-20 11:35:50.888416] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:23:05.519 [2024-11-20 11:35:50.888423] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:23:05.519 [2024-11-20 11:35:50.888457] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:23:05.519 [2024-11-20 11:35:50.888542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:05.519 [2024-11-20 11:35:50.888565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfd47f0 with addr=10.0.0.3, port=4420 00:23:05.519 [2024-11-20 11:35:50.888577] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd47f0 is same with the state(6) to be set 00:23:05.519 [2024-11-20 11:35:50.888595] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfd47f0 (9): Bad file descriptor 00:23:05.519 [2024-11-20 11:35:50.888637] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:23:05.519 [2024-11-20 11:35:50.888651] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:23:05.519 [2024-11-20 11:35:50.888662] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:23:05.519 [2024-11-20 11:35:50.888672] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:23:05.519 [2024-11-20 11:35:50.888678] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:23:05.519 [2024-11-20 11:35:50.888684] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:23:05.519 [2024-11-20 11:35:50.895367] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Delete qpairs for reset. 00:23:05.519 [2024-11-20 11:35:50.895394] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] qpairs were deleted. 00:23:05.519 [2024-11-20 11:35:50.895401] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start disconnecting ctrlr. 00:23:05.519 [2024-11-20 11:35:50.895406] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20, 1] resetting controller 00:23:05.519 [2024-11-20 11:35:50.895432] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start reconnecting ctrlr. 00:23:05.519 [2024-11-20 11:35:50.895492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:05.519 [2024-11-20 11:35:50.895512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfffdd0 with addr=10.0.0.4, port=4420 00:23:05.519 [2024-11-20 11:35:50.895523] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfffdd0 is same with the state(6) to be set 00:23:05.519 [2024-11-20 11:35:50.895540] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfffdd0 (9): Bad file descriptor 00:23:05.519 [2024-11-20 11:35:50.895567] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Ctrlr is in error state 00:23:05.519 [2024-11-20 11:35:50.895578] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] controller reinitialization failed 00:23:05.519 [2024-11-20 11:35:50.895588] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] in failed state. 00:23:05.519 [2024-11-20 11:35:50.895596] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] ctrlr could not be connected. 
00:23:05.519 [2024-11-20 11:35:50.895602] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Clear pending resets. 00:23:05.519 [2024-11-20 11:35:50.895607] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Resetting controller failed. 00:23:05.519 [2024-11-20 11:35:50.898465] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:23:05.519 [2024-11-20 11:35:50.898489] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:23:05.519 [2024-11-20 11:35:50.898496] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:23:05.519 [2024-11-20 11:35:50.898501] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:23:05.519 [2024-11-20 11:35:50.898526] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:23:05.519 [2024-11-20 11:35:50.898579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:05.519 [2024-11-20 11:35:50.898599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfd47f0 with addr=10.0.0.3, port=4420 00:23:05.519 [2024-11-20 11:35:50.898610] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd47f0 is same with the state(6) to be set 00:23:05.519 [2024-11-20 11:35:50.898640] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfd47f0 (9): Bad file descriptor 00:23:05.519 [2024-11-20 11:35:50.898668] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:23:05.519 [2024-11-20 11:35:50.898679] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:23:05.519 [2024-11-20 11:35:50.898689] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:23:05.519 [2024-11-20 11:35:50.898697] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:23:05.519 [2024-11-20 11:35:50.898703] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:23:05.519 [2024-11-20 11:35:50.898708] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:23:05.519 [2024-11-20 11:35:50.905443] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Delete qpairs for reset. 00:23:05.519 [2024-11-20 11:35:50.905468] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] qpairs were deleted. 00:23:05.519 [2024-11-20 11:35:50.905474] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start disconnecting ctrlr. 00:23:05.519 [2024-11-20 11:35:50.905480] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20, 1] resetting controller 00:23:05.519 [2024-11-20 11:35:50.905504] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start reconnecting ctrlr. 
00:23:05.519 [2024-11-20 11:35:50.905557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:05.519 [2024-11-20 11:35:50.905576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfffdd0 with addr=10.0.0.4, port=4420 00:23:05.519 [2024-11-20 11:35:50.905587] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfffdd0 is same with the state(6) to be set 00:23:05.519 [2024-11-20 11:35:50.905602] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfffdd0 (9): Bad file descriptor 00:23:05.519 [2024-11-20 11:35:50.905630] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Ctrlr is in error state 00:23:05.519 [2024-11-20 11:35:50.905642] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] controller reinitialization failed 00:23:05.519 [2024-11-20 11:35:50.905651] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] in failed state. 00:23:05.519 [2024-11-20 11:35:50.905660] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] ctrlr could not be connected. 00:23:05.519 [2024-11-20 11:35:50.905665] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Clear pending resets. 00:23:05.519 [2024-11-20 11:35:50.905670] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Resetting controller failed. 00:23:05.519 [2024-11-20 11:35:50.908537] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:23:05.519 [2024-11-20 11:35:50.908560] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:23:05.519 [2024-11-20 11:35:50.908567] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:23:05.519 [2024-11-20 11:35:50.908574] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:23:05.519 [2024-11-20 11:35:50.908598] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:23:05.519 [2024-11-20 11:35:50.908659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:05.519 [2024-11-20 11:35:50.908680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfd47f0 with addr=10.0.0.3, port=4420 00:23:05.519 [2024-11-20 11:35:50.908690] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd47f0 is same with the state(6) to be set 00:23:05.519 [2024-11-20 11:35:50.908707] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfd47f0 (9): Bad file descriptor 00:23:05.519 [2024-11-20 11:35:50.908732] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:23:05.519 [2024-11-20 11:35:50.908743] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:23:05.519 [2024-11-20 11:35:50.908753] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:23:05.519 [2024-11-20 11:35:50.908761] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
00:23:05.519 [2024-11-20 11:35:50.908767] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:23:05.520 [2024-11-20 11:35:50.908772] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:23:05.520 [2024-11-20 11:35:50.915514] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Delete qpairs for reset. 00:23:05.520 [2024-11-20 11:35:50.915539] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] qpairs were deleted. 00:23:05.520 [2024-11-20 11:35:50.915545] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start disconnecting ctrlr. 00:23:05.520 [2024-11-20 11:35:50.915551] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20, 1] resetting controller 00:23:05.520 [2024-11-20 11:35:50.915574] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start reconnecting ctrlr. 00:23:05.520 [2024-11-20 11:35:50.915638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:05.520 [2024-11-20 11:35:50.915658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfffdd0 with addr=10.0.0.4, port=4420 00:23:05.520 [2024-11-20 11:35:50.915668] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfffdd0 is same with the state(6) to be set 00:23:05.520 [2024-11-20 11:35:50.915694] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfffdd0 (9): Bad file descriptor 00:23:05.520 [2024-11-20 11:35:50.915710] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Ctrlr is in error state 00:23:05.520 [2024-11-20 11:35:50.915720] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] controller reinitialization failed 00:23:05.520 [2024-11-20 11:35:50.915729] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] in failed state. 00:23:05.520 [2024-11-20 11:35:50.915737] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] ctrlr could not be connected. 00:23:05.520 [2024-11-20 11:35:50.915743] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Clear pending resets. 00:23:05.520 [2024-11-20 11:35:50.915748] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Resetting controller failed. 00:23:05.520 [2024-11-20 11:35:50.918608] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:23:05.520 [2024-11-20 11:35:50.918645] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:23:05.520 [2024-11-20 11:35:50.918652] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:23:05.520 [2024-11-20 11:35:50.918658] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:23:05.520 [2024-11-20 11:35:50.918683] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:23:05.520 [2024-11-20 11:35:50.918737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:05.520 [2024-11-20 11:35:50.918756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfd47f0 with addr=10.0.0.3, port=4420 00:23:05.520 [2024-11-20 11:35:50.918767] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd47f0 is same with the state(6) to be set 00:23:05.520 [2024-11-20 11:35:50.918783] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfd47f0 (9): Bad file descriptor 00:23:05.520 [2024-11-20 11:35:50.918808] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:23:05.520 [2024-11-20 11:35:50.918820] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:23:05.520 [2024-11-20 11:35:50.918830] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:23:05.520 [2024-11-20 11:35:50.918838] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:23:05.520 [2024-11-20 11:35:50.918843] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:23:05.520 [2024-11-20 11:35:50.918848] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:23:05.520 [2024-11-20 11:35:50.925586] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Delete qpairs for reset. 00:23:05.520 [2024-11-20 11:35:50.925612] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] qpairs were deleted. 00:23:05.520 [2024-11-20 11:35:50.925628] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start disconnecting ctrlr. 00:23:05.520 [2024-11-20 11:35:50.925634] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20, 1] resetting controller 00:23:05.520 [2024-11-20 11:35:50.925658] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start reconnecting ctrlr. 00:23:05.520 [2024-11-20 11:35:50.925710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:05.520 [2024-11-20 11:35:50.925730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfffdd0 with addr=10.0.0.4, port=4420 00:23:05.520 [2024-11-20 11:35:50.925740] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfffdd0 is same with the state(6) to be set 00:23:05.520 [2024-11-20 11:35:50.925766] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfffdd0 (9): Bad file descriptor 00:23:05.520 [2024-11-20 11:35:50.925783] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Ctrlr is in error state 00:23:05.520 [2024-11-20 11:35:50.925797] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] controller reinitialization failed 00:23:05.520 [2024-11-20 11:35:50.925807] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] in failed state. 00:23:05.520 [2024-11-20 11:35:50.925815] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] ctrlr could not be connected. 
00:23:05.520 [2024-11-20 11:35:50.925820] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Clear pending resets. 00:23:05.520 [2024-11-20 11:35:50.925825] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Resetting controller failed. 00:23:05.520 [2024-11-20 11:35:50.928692] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:23:05.520 [2024-11-20 11:35:50.928716] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:23:05.520 [2024-11-20 11:35:50.928723] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:23:05.520 [2024-11-20 11:35:50.928728] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:23:05.520 [2024-11-20 11:35:50.928752] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:23:05.520 [2024-11-20 11:35:50.928803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:05.520 [2024-11-20 11:35:50.928823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfd47f0 with addr=10.0.0.3, port=4420 00:23:05.520 [2024-11-20 11:35:50.928833] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd47f0 is same with the state(6) to be set 00:23:05.520 [2024-11-20 11:35:50.928849] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfd47f0 (9): Bad file descriptor 00:23:05.520 [2024-11-20 11:35:50.928875] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:23:05.520 [2024-11-20 11:35:50.928887] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:23:05.520 [2024-11-20 11:35:50.928897] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:23:05.520 [2024-11-20 11:35:50.928905] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:23:05.520 [2024-11-20 11:35:50.928911] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:23:05.520 [2024-11-20 11:35:50.928916] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:23:05.520 [2024-11-20 11:35:50.935669] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Delete qpairs for reset. 00:23:05.520 [2024-11-20 11:35:50.935693] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] qpairs were deleted. 00:23:05.520 [2024-11-20 11:35:50.935700] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start disconnecting ctrlr. 00:23:05.520 [2024-11-20 11:35:50.935705] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20, 1] resetting controller 00:23:05.520 [2024-11-20 11:35:50.935728] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start reconnecting ctrlr. 
00:23:05.520 [2024-11-20 11:35:50.935779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:05.520 [2024-11-20 11:35:50.935798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfffdd0 with addr=10.0.0.4, port=4420 00:23:05.520 [2024-11-20 11:35:50.935808] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfffdd0 is same with the state(6) to be set 00:23:05.520 [2024-11-20 11:35:50.935824] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfffdd0 (9): Bad file descriptor 00:23:05.520 [2024-11-20 11:35:50.935839] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Ctrlr is in error state 00:23:05.520 [2024-11-20 11:35:50.935849] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] controller reinitialization failed 00:23:05.520 [2024-11-20 11:35:50.935858] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] in failed state. 00:23:05.520 [2024-11-20 11:35:50.935866] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] ctrlr could not be connected. 00:23:05.520 [2024-11-20 11:35:50.935872] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Clear pending resets. 00:23:05.520 [2024-11-20 11:35:50.935877] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Resetting controller failed. 00:23:05.520 [2024-11-20 11:35:50.938763] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:23:05.520 [2024-11-20 11:35:50.938786] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:23:05.520 [2024-11-20 11:35:50.938793] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:23:05.520 [2024-11-20 11:35:50.938798] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:23:05.520 [2024-11-20 11:35:50.938821] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:23:05.520 [2024-11-20 11:35:50.938871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:05.520 [2024-11-20 11:35:50.938890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfd47f0 with addr=10.0.0.3, port=4420 00:23:05.520 [2024-11-20 11:35:50.938900] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd47f0 is same with the state(6) to be set 00:23:05.520 [2024-11-20 11:35:50.938916] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfd47f0 (9): Bad file descriptor 00:23:05.520 [2024-11-20 11:35:50.938943] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:23:05.520 [2024-11-20 11:35:50.938954] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:23:05.520 [2024-11-20 11:35:50.938964] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:23:05.521 [2024-11-20 11:35:50.938972] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
00:23:05.521 [2024-11-20 11:35:50.938978] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:23:05.521 [2024-11-20 11:35:50.938983] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:23:05.521 [2024-11-20 11:35:50.945739] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Delete qpairs for reset. 00:23:05.521 [2024-11-20 11:35:50.945770] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] qpairs were deleted. 00:23:05.521 [2024-11-20 11:35:50.945777] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start disconnecting ctrlr. 00:23:05.521 [2024-11-20 11:35:50.945783] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20, 1] resetting controller 00:23:05.521 [2024-11-20 11:35:50.945806] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start reconnecting ctrlr. 00:23:05.521 [2024-11-20 11:35:50.945857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:05.521 [2024-11-20 11:35:50.945876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfffdd0 with addr=10.0.0.4, port=4420 00:23:05.521 [2024-11-20 11:35:50.945886] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfffdd0 is same with the state(6) to be set 00:23:05.521 [2024-11-20 11:35:50.945902] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfffdd0 (9): Bad file descriptor 00:23:05.521 [2024-11-20 11:35:50.945917] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Ctrlr is in error state 00:23:05.521 [2024-11-20 11:35:50.945926] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] controller reinitialization failed 00:23:05.521 [2024-11-20 11:35:50.945935] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] in failed state. 00:23:05.521 [2024-11-20 11:35:50.945943] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] ctrlr could not be connected. 00:23:05.521 [2024-11-20 11:35:50.945949] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Clear pending resets. 00:23:05.521 [2024-11-20 11:35:50.945954] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Resetting controller failed. 00:23:05.521 [2024-11-20 11:35:50.948832] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:23:05.521 [2024-11-20 11:35:50.948856] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:23:05.521 [2024-11-20 11:35:50.948863] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:23:05.521 [2024-11-20 11:35:50.948868] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:23:05.521 [2024-11-20 11:35:50.948891] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:23:05.521 [2024-11-20 11:35:50.948941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:05.521 [2024-11-20 11:35:50.948960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfd47f0 with addr=10.0.0.3, port=4420 00:23:05.521 [2024-11-20 11:35:50.948970] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd47f0 is same with the state(6) to be set 00:23:05.521 [2024-11-20 11:35:50.948986] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfd47f0 (9): Bad file descriptor 00:23:05.521 [2024-11-20 11:35:50.949011] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:23:05.521 [2024-11-20 11:35:50.949022] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:23:05.521 [2024-11-20 11:35:50.949032] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:23:05.521 [2024-11-20 11:35:50.949040] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:23:05.521 [2024-11-20 11:35:50.949045] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:23:05.521 [2024-11-20 11:35:50.949050] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:23:05.521 [2024-11-20 11:35:50.955819] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Delete qpairs for reset. 00:23:05.521 [2024-11-20 11:35:50.955843] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] qpairs were deleted. 00:23:05.521 [2024-11-20 11:35:50.955850] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start disconnecting ctrlr. 00:23:05.521 [2024-11-20 11:35:50.955856] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20, 1] resetting controller 00:23:05.521 [2024-11-20 11:35:50.955886] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start reconnecting ctrlr. 00:23:05.521 [2024-11-20 11:35:50.955937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:05.521 [2024-11-20 11:35:50.955956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfffdd0 with addr=10.0.0.4, port=4420 00:23:05.521 [2024-11-20 11:35:50.955967] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfffdd0 is same with the state(6) to be set 00:23:05.521 [2024-11-20 11:35:50.955983] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfffdd0 (9): Bad file descriptor 00:23:05.521 [2024-11-20 11:35:50.955998] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Ctrlr is in error state 00:23:05.521 [2024-11-20 11:35:50.956007] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] controller reinitialization failed 00:23:05.521 [2024-11-20 11:35:50.956016] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] in failed state. 00:23:05.521 [2024-11-20 11:35:50.956024] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] ctrlr could not be connected. 
00:23:05.521 [2024-11-20 11:35:50.956030] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Clear pending resets. 00:23:05.521 [2024-11-20 11:35:50.956035] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Resetting controller failed. 00:23:05.521 [2024-11-20 11:35:50.958902] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:23:05.521 [2024-11-20 11:35:50.958925] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:23:05.521 [2024-11-20 11:35:50.958932] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:23:05.521 [2024-11-20 11:35:50.958938] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:23:05.521 [2024-11-20 11:35:50.958961] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:23:05.521 [2024-11-20 11:35:50.959010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:05.521 [2024-11-20 11:35:50.959040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfd47f0 with addr=10.0.0.3, port=4420 00:23:05.521 [2024-11-20 11:35:50.959050] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd47f0 is same with the state(6) to be set 00:23:05.521 [2024-11-20 11:35:50.959066] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfd47f0 (9): Bad file descriptor 00:23:05.521 [2024-11-20 11:35:50.959092] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:23:05.521 [2024-11-20 11:35:50.959104] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:23:05.521 [2024-11-20 11:35:50.959114] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:23:05.521 [2024-11-20 11:35:50.959122] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:23:05.521 [2024-11-20 11:35:50.959128] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:23:05.521 [2024-11-20 11:35:50.959132] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:23:05.521 [2024-11-20 11:35:50.965906] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Delete qpairs for reset. 00:23:05.521 [2024-11-20 11:35:50.965934] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] qpairs were deleted. 00:23:05.521 [2024-11-20 11:35:50.965941] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start disconnecting ctrlr. 00:23:05.521 [2024-11-20 11:35:50.965947] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20, 1] resetting controller 00:23:05.521 [2024-11-20 11:35:50.965971] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start reconnecting ctrlr. 
00:23:05.521 [2024-11-20 11:35:50.966027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:05.521 [2024-11-20 11:35:50.966047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfffdd0 with addr=10.0.0.4, port=4420 00:23:05.521 [2024-11-20 11:35:50.966058] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfffdd0 is same with the state(6) to be set 00:23:05.521 [2024-11-20 11:35:50.966075] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfffdd0 (9): Bad file descriptor 00:23:05.521 [2024-11-20 11:35:50.966090] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Ctrlr is in error state 00:23:05.521 [2024-11-20 11:35:50.966100] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] controller reinitialization failed 00:23:05.521 [2024-11-20 11:35:50.966109] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] in failed state. 00:23:05.521 [2024-11-20 11:35:50.966118] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] ctrlr could not be connected. 00:23:05.521 [2024-11-20 11:35:50.966123] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Clear pending resets. 00:23:05.521 [2024-11-20 11:35:50.966128] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Resetting controller failed. 00:23:05.521 [2024-11-20 11:35:50.968973] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:23:05.521 [2024-11-20 11:35:50.968996] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:23:05.521 [2024-11-20 11:35:50.969003] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:23:05.521 [2024-11-20 11:35:50.969008] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:23:05.521 [2024-11-20 11:35:50.969032] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:23:05.521 [2024-11-20 11:35:50.969083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:05.521 [2024-11-20 11:35:50.969101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfd47f0 with addr=10.0.0.3, port=4420 00:23:05.521 [2024-11-20 11:35:50.969112] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd47f0 is same with the state(6) to be set 00:23:05.521 [2024-11-20 11:35:50.969128] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfd47f0 (9): Bad file descriptor 00:23:05.521 [2024-11-20 11:35:50.969154] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:23:05.521 [2024-11-20 11:35:50.969166] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:23:05.522 [2024-11-20 11:35:50.969176] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:23:05.522 [2024-11-20 11:35:50.969184] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
00:23:05.522 [2024-11-20 11:35:50.969190] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:23:05.522 [2024-11-20 11:35:50.969195] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:23:05.522 [2024-11-20 11:35:50.975982] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Delete qpairs for reset. 00:23:05.522 [2024-11-20 11:35:50.976007] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] qpairs were deleted. 00:23:05.522 [2024-11-20 11:35:50.976013] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start disconnecting ctrlr. 00:23:05.522 [2024-11-20 11:35:50.976019] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20, 1] resetting controller 00:23:05.522 [2024-11-20 11:35:50.976042] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start reconnecting ctrlr. 00:23:05.522 [2024-11-20 11:35:50.976093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:05.522 [2024-11-20 11:35:50.976112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfffdd0 with addr=10.0.0.4, port=4420 00:23:05.522 [2024-11-20 11:35:50.976123] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfffdd0 is same with the state(6) to be set 00:23:05.522 [2024-11-20 11:35:50.976139] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfffdd0 (9): Bad file descriptor 00:23:05.522 [2024-11-20 11:35:50.976154] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Ctrlr is in error state 00:23:05.522 [2024-11-20 11:35:50.976163] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] controller reinitialization failed 00:23:05.522 [2024-11-20 11:35:50.976172] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] in failed state. 00:23:05.522 [2024-11-20 11:35:50.976181] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] ctrlr could not be connected. 00:23:05.522 [2024-11-20 11:35:50.976186] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Clear pending resets. 00:23:05.522 [2024-11-20 11:35:50.976191] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Resetting controller failed. 00:23:05.522 [2024-11-20 11:35:50.979043] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:23:05.522 [2024-11-20 11:35:50.979067] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:23:05.522 [2024-11-20 11:35:50.979074] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:23:05.522 [2024-11-20 11:35:50.979079] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:23:05.522 [2024-11-20 11:35:50.979103] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:23:05.522 [2024-11-20 11:35:50.979155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:05.522 [2024-11-20 11:35:50.979174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfd47f0 with addr=10.0.0.3, port=4420 00:23:05.522 [2024-11-20 11:35:50.979184] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd47f0 is same with the state(6) to be set 00:23:05.522 [2024-11-20 11:35:50.979200] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfd47f0 (9): Bad file descriptor 00:23:05.522 [2024-11-20 11:35:50.979225] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:23:05.522 [2024-11-20 11:35:50.979237] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:23:05.522 [2024-11-20 11:35:50.979246] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:23:05.522 [2024-11-20 11:35:50.979254] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:23:05.522 [2024-11-20 11:35:50.979260] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:23:05.522 [2024-11-20 11:35:50.979265] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:23:05.522 [2024-11-20 11:35:50.986053] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Delete qpairs for reset. 00:23:05.522 [2024-11-20 11:35:50.986078] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] qpairs were deleted. 00:23:05.522 [2024-11-20 11:35:50.986085] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start disconnecting ctrlr. 00:23:05.522 [2024-11-20 11:35:50.986090] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20, 1] resetting controller 00:23:05.522 [2024-11-20 11:35:50.986114] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start reconnecting ctrlr. 00:23:05.522 [2024-11-20 11:35:50.986164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:05.522 [2024-11-20 11:35:50.986183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfffdd0 with addr=10.0.0.4, port=4420 00:23:05.522 [2024-11-20 11:35:50.986194] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfffdd0 is same with the state(6) to be set 00:23:05.522 [2024-11-20 11:35:50.986210] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfffdd0 (9): Bad file descriptor 00:23:05.522 [2024-11-20 11:35:50.986225] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Ctrlr is in error state 00:23:05.522 [2024-11-20 11:35:50.986234] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] controller reinitialization failed 00:23:05.522 [2024-11-20 11:35:50.986243] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] in failed state. 00:23:05.522 [2024-11-20 11:35:50.986251] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] ctrlr could not be connected. 
00:23:05.522 [2024-11-20 11:35:50.986257] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Clear pending resets. 00:23:05.522 [2024-11-20 11:35:50.986262] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Resetting controller failed. 00:23:05.522 [2024-11-20 11:35:50.989112] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:23:05.522 [2024-11-20 11:35:50.989137] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:23:05.522 [2024-11-20 11:35:50.989143] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:23:05.522 [2024-11-20 11:35:50.989149] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:23:05.522 [2024-11-20 11:35:50.989173] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:23:05.522 [2024-11-20 11:35:50.989222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:05.522 [2024-11-20 11:35:50.989241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfd47f0 with addr=10.0.0.3, port=4420 00:23:05.522 [2024-11-20 11:35:50.989252] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd47f0 is same with the state(6) to be set 00:23:05.522 [2024-11-20 11:35:50.989268] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfd47f0 (9): Bad file descriptor 00:23:05.522 [2024-11-20 11:35:50.989293] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:23:05.522 [2024-11-20 11:35:50.989304] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:23:05.522 [2024-11-20 11:35:50.989314] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:23:05.522 [2024-11-20 11:35:50.989322] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:23:05.522 [2024-11-20 11:35:50.989327] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:23:05.522 [2024-11-20 11:35:50.989332] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:23:05.522 [2024-11-20 11:35:50.996125] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Delete qpairs for reset. 00:23:05.522 [2024-11-20 11:35:50.996149] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] qpairs were deleted. 00:23:05.522 [2024-11-20 11:35:50.996156] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start disconnecting ctrlr. 00:23:05.522 [2024-11-20 11:35:50.996161] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20, 1] resetting controller 00:23:05.522 [2024-11-20 11:35:50.996184] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start reconnecting ctrlr. 
00:23:05.522 [2024-11-20 11:35:50.996234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:05.522 [2024-11-20 11:35:50.996253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfffdd0 with addr=10.0.0.4, port=4420 00:23:05.522 [2024-11-20 11:35:50.996263] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfffdd0 is same with the state(6) to be set 00:23:05.522 [2024-11-20 11:35:50.996279] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfffdd0 (9): Bad file descriptor 00:23:05.522 [2024-11-20 11:35:50.996294] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Ctrlr is in error state 00:23:05.522 [2024-11-20 11:35:50.996303] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] controller reinitialization failed 00:23:05.522 [2024-11-20 11:35:50.996313] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] in failed state. 00:23:05.522 [2024-11-20 11:35:50.996321] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] ctrlr could not be connected. 00:23:05.522 [2024-11-20 11:35:50.996327] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Clear pending resets. 00:23:05.522 [2024-11-20 11:35:50.996332] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Resetting controller failed. 00:23:05.522 [2024-11-20 11:35:50.999183] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:23:05.522 [2024-11-20 11:35:50.999207] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:23:05.522 [2024-11-20 11:35:50.999213] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:23:05.522 [2024-11-20 11:35:50.999219] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:23:05.522 [2024-11-20 11:35:50.999241] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:23:05.522 [2024-11-20 11:35:50.999291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:05.522 [2024-11-20 11:35:50.999310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfd47f0 with addr=10.0.0.3, port=4420 00:23:05.522 [2024-11-20 11:35:50.999320] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd47f0 is same with the state(6) to be set 00:23:05.522 [2024-11-20 11:35:50.999336] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfd47f0 (9): Bad file descriptor 00:23:05.522 [2024-11-20 11:35:50.999362] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:23:05.523 [2024-11-20 11:35:50.999374] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:23:05.523 [2024-11-20 11:35:50.999383] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:23:05.523 [2024-11-20 11:35:50.999391] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
00:23:05.523 [2024-11-20 11:35:50.999397] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:23:05.523 [2024-11-20 11:35:50.999402] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:23:05.523 [2024-11-20 11:35:51.003450] bdev_nvme.c:7266:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 not found 00:23:05.523 [2024-11-20 11:35:51.003482] bdev_nvme.c:7257:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:23:05.523 [2024-11-20 11:35:51.003509] bdev_nvme.c:7442:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:23:05.523 [2024-11-20 11:35:51.003546] bdev_nvme.c:7266:discovery_remove_controllers: *INFO*: Discovery[10.0.0.4:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.4:4420 not found 00:23:05.523 [2024-11-20 11:35:51.003562] bdev_nvme.c:7257:discovery_remove_controllers: *INFO*: Discovery[10.0.0.4:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.4:4421 found again 00:23:05.523 [2024-11-20 11:35:51.003575] bdev_nvme.c:7442:get_discovery_log_page: *INFO*: Discovery[10.0.0.4:8009] sent discovery log page command 00:23:05.523 [2024-11-20 11:35:51.089539] bdev_nvme.c:7257:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:23:05.523 [2024-11-20 11:35:51.090532] bdev_nvme.c:7257:discovery_remove_controllers: *INFO*: Discovery[10.0.0.4:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.4:4421 found again 00:23:06.458 11:35:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@199 -- # get_subsystem_names 00:23:06.459 11:35:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:06.459 11:35:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:06.459 11:35:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:06.459 11:35:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:23:06.459 11:35:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:23:06.459 11:35:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:23:06.459 11:35:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:06.459 11:35:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@199 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]] 00:23:06.459 11:35:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@200 -- # get_bdev_list 00:23:06.459 11:35:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:06.459 11:35:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:23:06.459 11:35:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:06.459 11:35:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:23:06.459 11:35:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:06.459 11:35:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 
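Not part of the captured output: the xtrace above drives get_subsystem_names through the host application's RPC socket. A minimal stand-alone sketch of the same check, assuming SPDK's scripts/rpc.py client is available and the host app listens on the /tmp/host.sock socket seen in the trace:

  # list NVMe bdev controllers known to the host app and normalize the names
  names=$(rpc.py -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs)
  # after mDNS discovery against both targets, two controllers are expected
  [[ $names == "mdns0_nvme0 mdns1_nvme0" ]] && echo "mdns0_nvme0 and mdns1_nvme0 attached"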
00:23:06.459 11:35:52 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:06.459 11:35:52 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@200 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:23:06.459 11:35:52 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@201 -- # get_subsystem_paths mdns0_nvme0 00:23:06.459 11:35:52 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:06.459 11:35:52 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 00:23:06.459 11:35:52 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 00:23:06.459 11:35:52 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:06.459 11:35:52 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 00:23:06.459 11:35:52 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:06.459 11:35:52 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:06.717 11:35:52 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@201 -- # [[ 4421 == \4\4\2\1 ]] 00:23:06.717 11:35:52 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@202 -- # get_subsystem_paths mdns1_nvme0 00:23:06.717 11:35:52 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 00:23:06.717 11:35:52 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:06.717 11:35:52 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:06.717 11:35:52 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:06.717 11:35:52 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 00:23:06.717 11:35:52 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 00:23:06.717 11:35:52 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:06.717 11:35:52 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@202 -- # [[ 4421 == \4\4\2\1 ]] 00:23:06.717 11:35:52 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@203 -- # get_notification_count 00:23:06.717 11:35:52 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4 00:23:06.717 11:35:52 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:06.717 11:35:52 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # jq '. 
| length' 00:23:06.717 11:35:52 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:06.717 11:35:52 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:06.717 11:35:52 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # notification_count=0 00:23:06.717 11:35:52 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@117 -- # notify_id=4 00:23:06.717 11:35:52 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@204 -- # [[ 0 == 0 ]] 00:23:06.717 11:35:52 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@206 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_mdns_discovery -b mdns 00:23:06.717 11:35:52 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:06.717 11:35:52 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:06.717 11:35:52 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:06.717 11:35:52 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@207 -- # sleep 1 00:23:06.717 [2024-11-20 11:35:52.228368] bdev_mdns_client.c: 425:bdev_nvme_avahi_iterate: *INFO*: Stopping avahi poller for service _nvme-disc._tcp 00:23:07.785 11:35:53 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@209 -- # get_mdns_discovery_svcs 00:23:07.785 11:35:53 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info 00:23:07.785 11:35:53 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:07.785 11:35:53 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:07.785 11:35:53 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # sort 00:23:07.785 11:35:53 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # jq -r '.[].name' 00:23:07.785 11:35:53 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # xargs 00:23:07.785 11:35:53 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:07.785 11:35:53 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@209 -- # [[ '' == '' ]] 00:23:07.785 11:35:53 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@210 -- # get_subsystem_names 00:23:07.785 11:35:53 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:07.785 11:35:53 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:23:07.785 11:35:53 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:07.785 11:35:53 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:07.785 11:35:53 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:23:07.785 11:35:53 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:23:07.785 11:35:53 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:07.785 11:35:53 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@210 -- # [[ '' == '' ]] 00:23:07.785 11:35:53 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@211 -- # get_bdev_list 00:23:07.785 11:35:53 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 
-- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:07.785 11:35:53 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:23:07.785 11:35:53 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:07.786 11:35:53 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:07.786 11:35:53 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:23:07.786 11:35:53 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:23:07.786 11:35:53 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:08.044 11:35:53 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@211 -- # [[ '' == '' ]] 00:23:08.044 11:35:53 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@212 -- # get_notification_count 00:23:08.044 11:35:53 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4 00:23:08.044 11:35:53 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # jq '. | length' 00:23:08.044 11:35:53 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:08.044 11:35:53 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:08.044 11:35:53 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:08.044 11:35:53 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # notification_count=4 00:23:08.044 11:35:53 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@117 -- # notify_id=8 00:23:08.044 11:35:53 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@213 -- # [[ 4 == 4 ]] 00:23:08.044 11:35:53 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@216 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:23:08.044 11:35:53 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:08.044 11:35:53 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:08.044 11:35:53 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:08.044 11:35:53 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@217 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test 00:23:08.044 11:35:53 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@652 -- # local es=0 00:23:08.045 11:35:53 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test 00:23:08.045 11:35:53 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:23:08.045 11:35:53 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:08.045 11:35:53 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:23:08.045 11:35:53 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:08.045 11:35:53 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock 
bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test 00:23:08.045 11:35:53 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:08.045 11:35:53 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:08.045 [2024-11-20 11:35:53.445314] bdev_mdns_client.c: 471:bdev_nvme_start_mdns_discovery: *ERROR*: mDNS discovery already running with name mdns 00:23:08.045 2024/11/20 11:35:53 error on JSON-RPC call, method: bdev_nvme_start_mdns_discovery, params: map[hostnqn:nqn.2021-12.io.spdk:test name:mdns svcname:_nvme-disc._http], err: error received for bdev_nvme_start_mdns_discovery method, err: Code=-17 Msg=File exists 00:23:08.045 request: 00:23:08.045 { 00:23:08.045 "method": "bdev_nvme_start_mdns_discovery", 00:23:08.045 "params": { 00:23:08.045 "name": "mdns", 00:23:08.045 "svcname": "_nvme-disc._http", 00:23:08.045 "hostnqn": "nqn.2021-12.io.spdk:test" 00:23:08.045 } 00:23:08.045 } 00:23:08.045 Got JSON-RPC error response 00:23:08.045 GoRPCClient: error on JSON-RPC call 00:23:08.045 11:35:53 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:23:08.045 11:35:53 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@655 -- # es=1 00:23:08.045 11:35:53 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:08.045 11:35:53 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:08.045 11:35:53 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:08.045 11:35:53 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@218 -- # sleep 5 00:23:08.610 [2024-11-20 11:35:54.033970] bdev_mdns_client.c: 396:mdns_browse_handler: *INFO*: (Browser) CACHE_EXHAUSTED 00:23:08.610 [2024-11-20 11:35:54.133965] bdev_mdns_client.c: 396:mdns_browse_handler: *INFO*: (Browser) ALL_FOR_NOW 00:23:08.867 [2024-11-20 11:35:54.233977] bdev_mdns_client.c: 255:mdns_resolve_handler: *INFO*: Service 'spdk1' of type '_nvme-disc._tcp' in domain 'local' 00:23:08.867 [2024-11-20 11:35:54.234020] bdev_mdns_client.c: 260:mdns_resolve_handler: *INFO*: fedora39-cloud-1721788873-2326.local:8009 (10.0.0.4) 00:23:08.867 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:23:08.867 cookie is 0 00:23:08.867 is_local: 1 00:23:08.867 our_own: 0 00:23:08.867 wide_area: 0 00:23:08.867 multicast: 1 00:23:08.867 cached: 1 00:23:08.867 [2024-11-20 11:35:54.333983] bdev_mdns_client.c: 255:mdns_resolve_handler: *INFO*: Service 'spdk0' of type '_nvme-disc._tcp' in domain 'local' 00:23:08.867 [2024-11-20 11:35:54.334029] bdev_mdns_client.c: 260:mdns_resolve_handler: *INFO*: fedora39-cloud-1721788873-2326.local:8009 (10.0.0.4) 00:23:08.867 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:23:08.867 cookie is 0 00:23:08.867 is_local: 1 00:23:08.867 our_own: 0 00:23:08.867 wide_area: 0 00:23:08.867 multicast: 1 00:23:08.867 cached: 1 00:23:08.867 [2024-11-20 11:35:54.334045] bdev_mdns_client.c: 323:mdns_resolve_handler: *ERROR*: mDNS discovery entry exists already. 
trid->traddr: 10.0.0.4 trid->trsvcid: 8009 00:23:08.867 [2024-11-20 11:35:54.433974] bdev_mdns_client.c: 255:mdns_resolve_handler: *INFO*: Service 'spdk1' of type '_nvme-disc._tcp' in domain 'local' 00:23:08.867 [2024-11-20 11:35:54.434014] bdev_mdns_client.c: 260:mdns_resolve_handler: *INFO*: fedora39-cloud-1721788873-2326.local:8009 (10.0.0.3) 00:23:08.867 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:23:08.867 cookie is 0 00:23:08.867 is_local: 1 00:23:08.867 our_own: 0 00:23:08.867 wide_area: 0 00:23:08.867 multicast: 1 00:23:08.867 cached: 1 00:23:09.125 [2024-11-20 11:35:54.533975] bdev_mdns_client.c: 255:mdns_resolve_handler: *INFO*: Service 'spdk0' of type '_nvme-disc._tcp' in domain 'local' 00:23:09.125 [2024-11-20 11:35:54.534018] bdev_mdns_client.c: 260:mdns_resolve_handler: *INFO*: fedora39-cloud-1721788873-2326.local:8009 (10.0.0.3) 00:23:09.125 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:23:09.125 cookie is 0 00:23:09.125 is_local: 1 00:23:09.125 our_own: 0 00:23:09.125 wide_area: 0 00:23:09.125 multicast: 1 00:23:09.125 cached: 1 00:23:09.125 [2024-11-20 11:35:54.534032] bdev_mdns_client.c: 323:mdns_resolve_handler: *ERROR*: mDNS discovery entry exists already. trid->traddr: 10.0.0.3 trid->trsvcid: 8009 00:23:09.692 [2024-11-20 11:35:55.245790] bdev_nvme.c:7479:discovery_attach_cb: *INFO*: Discovery[10.0.0.4:8009] discovery ctrlr attached 00:23:09.692 [2024-11-20 11:35:55.245829] bdev_nvme.c:7565:discovery_poller: *INFO*: Discovery[10.0.0.4:8009] discovery ctrlr connected 00:23:09.692 [2024-11-20 11:35:55.245849] bdev_nvme.c:7442:get_discovery_log_page: *INFO*: Discovery[10.0.0.4:8009] sent discovery log page command 00:23:09.950 [2024-11-20 11:35:55.331931] bdev_nvme.c:7408:discovery_log_page_cb: *INFO*: Discovery[10.0.0.4:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.4:4421 new subsystem mdns0_nvme0 00:23:09.950 [2024-11-20 11:35:55.390402] bdev_nvme.c:5635:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode20, 3] ctrlr was created to 10.0.0.4:4421 00:23:09.950 [2024-11-20 11:35:55.391068] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode20, 3] Connecting qpair 0x1016930:1 started. 00:23:09.950 [2024-11-20 11:35:55.392526] bdev_nvme.c:7298:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.4:8009] attach mdns0_nvme0 done 00:23:09.950 [2024-11-20 11:35:55.392556] bdev_nvme.c:7257:discovery_remove_controllers: *INFO*: Discovery[10.0.0.4:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.4:4421 found again 00:23:09.950 [2024-11-20 11:35:55.394584] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode20, 3] qpair 0x1016930 was disconnected and freed. delete nvme_qpair. 
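Not part of the captured log: the errno 111 reconnect failures to port 4420 above, followed by discovery log pages reporting the subsystems on 4421 and the fresh controller attach recorded here, are what one would expect if the data listeners were moved from 4420 to 4421 on the targets. A hypothetical sketch of such a move using the standard nvmf listener RPCs (subsystem NQN, address, and ports taken from the trace; this is an illustration, not the harness's actual commands):

  # drop the old data listener, then expose the subsystem on the new port
  rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.4 -s 4420
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.4 -s 4421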
00:23:09.950 [2024-11-20 11:35:55.445747] bdev_nvme.c:7479:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:23:09.950 [2024-11-20 11:35:55.445789] bdev_nvme.c:7565:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:23:09.950 [2024-11-20 11:35:55.445809] bdev_nvme.c:7442:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:23:09.950 [2024-11-20 11:35:55.531904] bdev_nvme.c:7408:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 new subsystem mdns1_nvme0 00:23:10.208 [2024-11-20 11:35:55.590362] bdev_nvme.c:5635:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] ctrlr was created to 10.0.0.3:4421 00:23:10.208 [2024-11-20 11:35:55.591024] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0x10181f0:1 started. 00:23:10.208 [2024-11-20 11:35:55.592471] bdev_nvme.c:7298:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach mdns1_nvme0 done 00:23:10.208 [2024-11-20 11:35:55.592501] bdev_nvme.c:7257:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:23:10.208 [2024-11-20 11:35:55.594601] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] qpair 0x10181f0 was disconnected and freed. delete nvme_qpair. 00:23:13.492 11:35:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@220 -- # get_mdns_discovery_svcs 00:23:13.492 11:35:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info 00:23:13.492 11:35:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:13.492 11:35:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:13.492 11:35:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # jq -r '.[].name' 00:23:13.492 11:35:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # xargs 00:23:13.492 11:35:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # sort 00:23:13.492 11:35:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:13.492 11:35:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@220 -- # [[ mdns == \m\d\n\s ]] 00:23:13.492 11:35:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@221 -- # get_discovery_ctrlrs 00:23:13.492 11:35:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:23:13.492 11:35:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:13.492 11:35:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # jq -r '.[].name' 00:23:13.492 11:35:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:13.492 11:35:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # xargs 00:23:13.492 11:35:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # sort 00:23:13.492 11:35:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:13.492 11:35:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@221 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ 
\m\d\n\s\1\_\n\v\m\e ]] 00:23:13.492 11:35:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@222 -- # get_bdev_list 00:23:13.492 11:35:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:13.493 11:35:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:13.493 11:35:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:13.493 11:35:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:23:13.493 11:35:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:23:13.493 11:35:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:23:13.493 11:35:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:13.493 11:35:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@222 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:23:13.493 11:35:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@225 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:23:13.493 11:35:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@652 -- # local es=0 00:23:13.493 11:35:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:23:13.493 11:35:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:23:13.493 11:35:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:13.493 11:35:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:23:13.493 11:35:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:13.493 11:35:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:23:13.493 11:35:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:13.493 11:35:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:13.493 [2024-11-20 11:35:58.647662] bdev_mdns_client.c: 476:bdev_nvme_start_mdns_discovery: *ERROR*: mDNS discovery already running for service _nvme-disc._tcp 00:23:13.493 2024/11/20 11:35:58 error on JSON-RPC call, method: bdev_nvme_start_mdns_discovery, params: map[hostnqn:nqn.2021-12.io.spdk:test name:cdc svcname:_nvme-disc._tcp], err: error received for bdev_nvme_start_mdns_discovery method, err: Code=-17 Msg=File exists 00:23:13.493 request: 00:23:13.493 { 00:23:13.493 "method": "bdev_nvme_start_mdns_discovery", 00:23:13.493 "params": { 00:23:13.493 "name": "cdc", 00:23:13.493 "svcname": "_nvme-disc._tcp", 00:23:13.493 "hostnqn": "nqn.2021-12.io.spdk:test" 00:23:13.493 } 00:23:13.493 } 00:23:13.493 Got JSON-RPC error response 00:23:13.493 GoRPCClient: error on JSON-RPC call 00:23:13.493 11:35:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 
]] 00:23:13.493 11:35:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@655 -- # es=1 00:23:13.493 11:35:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:13.493 11:35:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:13.493 11:35:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:13.493 11:35:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@226 -- # get_discovery_ctrlrs 00:23:13.493 11:35:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:23:13.493 11:35:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:13.493 11:35:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # jq -r '.[].name' 00:23:13.493 11:35:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:13.493 11:35:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # xargs 00:23:13.493 11:35:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # sort 00:23:13.493 11:35:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:13.493 11:35:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@226 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ \m\d\n\s\1\_\n\v\m\e ]] 00:23:13.493 11:35:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@227 -- # get_bdev_list 00:23:13.493 11:35:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:13.493 11:35:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:13.493 11:35:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:13.493 11:35:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:23:13.493 11:35:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:23:13.493 11:35:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:23:13.493 11:35:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:13.493 11:35:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@227 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:23:13.493 11:35:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@228 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_mdns_discovery -b mdns 00:23:13.493 11:35:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:13.493 11:35:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:13.493 11:35:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:13.493 11:35:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@231 -- # check_mdns_request_exists spdk1 10.0.0.3 8009 found 00:23:13.493 11:35:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@85 -- # local process=spdk1 00:23:13.493 11:35:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@86 -- # local 
ip=10.0.0.3 00:23:13.493 11:35:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@87 -- # local port=8009 00:23:13.493 11:35:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # local check_type=found 00:23:13.493 11:35:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@89 -- # local output 00:23:13.493 11:35:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@92 -- # avahi-browse -t -r _nvme-disc._tcp -p 00:23:13.493 11:35:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@92 -- # output='+;(null);IPv4;spdk1;_nvme-disc._tcp;local 00:23:13.493 +;(null);IPv4;spdk0;_nvme-disc._tcp;local 00:23:13.493 +;(null);IPv4;spdk1;_nvme-disc._tcp;local 00:23:13.493 +;(null);IPv4;spdk0;_nvme-disc._tcp;local 00:23:13.493 =;(null);IPv4;spdk1;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:23:13.493 =;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:23:13.493 =;(null);IPv4;spdk1;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.3;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:23:13.493 =;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.3;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp"' 00:23:13.493 11:35:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@93 -- # readarray -t lines 00:23:13.493 11:35:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:23:13.493 11:35:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk1;_nvme-disc._tcp;local == *\s\p\d\k\1* ]] 00:23:13.493 11:35:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk1;_nvme-disc._tcp;local == *\1\0\.\0\.\0\.\3* ]] 00:23:13.493 11:35:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:23:13.493 11:35:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk0;_nvme-disc._tcp;local == *\s\p\d\k\1* ]] 00:23:13.493 11:35:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:23:13.493 11:35:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk1;_nvme-disc._tcp;local == *\s\p\d\k\1* ]] 00:23:13.493 11:35:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk1;_nvme-disc._tcp;local == *\1\0\.\0\.\0\.\3* ]] 00:23:13.493 11:35:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:23:13.493 11:35:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk0;_nvme-disc._tcp;local == *\s\p\d\k\1* ]] 00:23:13.493 11:35:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:23:13.493 11:35:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk1;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\s\p\d\k\1* ]] 00:23:13.493 11:35:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ 
=;(null);IPv4;spdk1;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\1\0\.\0\.\0\.\3* ]] 00:23:13.493 11:35:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:23:13.494 11:35:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\s\p\d\k\1* ]] 00:23:13.494 11:35:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:23:13.494 11:35:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk1;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.3;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\s\p\d\k\1* ]] 00:23:13.494 11:35:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk1;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.3;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\1\0\.\0\.\0\.\3* ]] 00:23:13.494 11:35:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk1;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.3;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\8\0\0\9* ]] 00:23:13.494 11:35:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@97 -- # [[ found == \f\o\u\n\d ]] 00:23:13.494 11:35:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@98 -- # return 0 00:23:13.494 11:35:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@232 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.3 -s 8009 00:23:13.494 11:35:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:13.494 11:35:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:13.494 11:35:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:13.494 11:35:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@234 -- # sleep 1 00:23:13.494 [2024-11-20 11:35:58.833962] bdev_mdns_client.c: 425:bdev_nvme_avahi_iterate: *INFO*: Stopping avahi poller for service _nvme-disc._tcp 00:23:14.429 11:35:59 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@236 -- # check_mdns_request_exists spdk1 10.0.0.3 8009 'not found' 00:23:14.429 11:35:59 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@85 -- # local process=spdk1 00:23:14.429 11:35:59 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@86 -- # local ip=10.0.0.3 00:23:14.429 11:35:59 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@87 -- # local port=8009 00:23:14.429 11:35:59 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # local 'check_type=not found' 00:23:14.429 11:35:59 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@89 -- # local output 00:23:14.429 11:35:59 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@92 -- # avahi-browse -t -r _nvme-disc._tcp -p 00:23:14.429 11:35:59 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@92 -- # output='+;(null);IPv4;spdk0;_nvme-disc._tcp;local 00:23:14.429 +;(null);IPv4;spdk0;_nvme-disc._tcp;local 00:23:14.429 
=;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:23:14.429 =;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.3;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp"' 00:23:14.429 11:35:59 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@93 -- # readarray -t lines 00:23:14.429 11:35:59 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:23:14.429 11:35:59 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk0;_nvme-disc._tcp;local == *\s\p\d\k\1* ]] 00:23:14.429 11:35:59 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:23:14.429 11:35:59 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk0;_nvme-disc._tcp;local == *\s\p\d\k\1* ]] 00:23:14.429 11:35:59 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:23:14.429 11:35:59 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\s\p\d\k\1* ]] 00:23:14.429 11:35:59 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:23:14.429 11:35:59 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.3;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\s\p\d\k\1* ]] 00:23:14.429 11:35:59 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@105 -- # [[ not found == \f\o\u\n\d ]] 00:23:14.429 11:35:59 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@108 -- # return 0 00:23:14.429 11:35:59 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@238 -- # rpc_cmd nvmf_stop_mdns_prr 00:23:14.429 11:35:59 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:14.429 11:35:59 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:14.429 11:35:59 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:14.429 11:35:59 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@240 -- # trap - SIGINT SIGTERM EXIT 00:23:14.429 11:35:59 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@242 -- # kill 94722 00:23:14.429 11:35:59 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@245 -- # wait 94722 00:23:14.429 11:35:59 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@246 -- # kill 94738 00:23:14.429 11:35:59 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@247 -- # nvmftestfini 00:23:14.429 11:35:59 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:14.429 11:35:59 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@121 -- # sync 00:23:14.429 Got SIGTERM, quitting. 00:23:14.429 Leaving mDNS multicast group on interface nvmf_tgt_if2.IPv4 with address 10.0.0.4. 00:23:14.429 Leaving mDNS multicast group on interface nvmf_tgt_if.IPv4 with address 10.0.0.3. 00:23:14.429 avahi-daemon 0.8 exiting. 
00:23:14.688 11:36:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:14.688 11:36:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@124 -- # set +e 00:23:14.688 11:36:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:14.688 11:36:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:14.688 rmmod nvme_tcp 00:23:14.688 rmmod nvme_fabrics 00:23:14.688 rmmod nvme_keyring 00:23:14.688 11:36:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:14.688 11:36:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@128 -- # set -e 00:23:14.688 11:36:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@129 -- # return 0 00:23:14.688 11:36:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@517 -- # '[' -n 94666 ']' 00:23:14.688 11:36:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@518 -- # killprocess 94666 00:23:14.688 11:36:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@954 -- # '[' -z 94666 ']' 00:23:14.688 11:36:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@958 -- # kill -0 94666 00:23:14.688 11:36:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@959 -- # uname 00:23:14.688 11:36:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:14.688 11:36:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 94666 00:23:14.688 11:36:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:14.688 11:36:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:14.688 killing process with pid 94666 00:23:14.688 11:36:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 94666' 00:23:14.688 11:36:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@973 -- # kill 94666 00:23:14.688 11:36:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@978 -- # wait 94666 00:23:14.688 11:36:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:14.688 11:36:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:14.688 11:36:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:14.688 11:36:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@297 -- # iptr 00:23:14.688 11:36:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@791 -- # iptables-save 00:23:14.688 11:36:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:23:14.688 11:36:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:14.947 11:36:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:14.947 11:36:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:23:14.947 11:36:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:23:14.947 11:36:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:23:14.947 11:36:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- 
nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:23:14.947 11:36:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:23:14.947 11:36:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:23:14.947 11:36:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:23:14.947 11:36:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:23:14.947 11:36:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:23:14.947 11:36:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:23:14.947 11:36:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:23:14.947 11:36:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:23:14.947 11:36:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:14.947 11:36:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:14.947 11:36:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@246 -- # remove_spdk_ns 00:23:14.947 11:36:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:14.947 11:36:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:14.947 11:36:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:14.947 11:36:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@300 -- # return 0 00:23:14.947 00:23:14.947 real 0m22.440s 00:23:14.947 user 0m43.706s 00:23:14.947 sys 0m1.967s 00:23:14.947 11:36:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:14.947 11:36:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:14.947 ************************************ 00:23:14.947 END TEST nvmf_mdns_discovery 00:23:14.947 ************************************ 00:23:15.207 11:36:00 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 1 -eq 1 ]] 00:23:15.207 11:36:00 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@42 -- # run_test nvmf_host_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:23:15.207 11:36:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:15.207 11:36:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:15.207 11:36:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:15.207 ************************************ 00:23:15.207 START TEST nvmf_host_multipath 00:23:15.207 ************************************ 00:23:15.207 11:36:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:23:15.207 * Looking for test storage... 
00:23:15.207 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:23:15.207 11:36:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:23:15.207 11:36:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1693 -- # lcov --version 00:23:15.207 11:36:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:23:15.207 11:36:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:23:15.207 11:36:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:15.207 11:36:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:15.207 11:36:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:15.207 11:36:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:23:15.207 11:36:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:23:15.207 11:36:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:23:15.207 11:36:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:23:15.207 11:36:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:23:15.207 11:36:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:23:15.207 11:36:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:23:15.207 11:36:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:15.207 11:36:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@344 -- # case "$op" in 00:23:15.207 11:36:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@345 -- # : 1 00:23:15.207 11:36:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:15.207 11:36:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:15.207 11:36:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@365 -- # decimal 1 00:23:15.207 11:36:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@353 -- # local d=1 00:23:15.207 11:36:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:15.207 11:36:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@355 -- # echo 1 00:23:15.207 11:36:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:23:15.207 11:36:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@366 -- # decimal 2 00:23:15.207 11:36:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@353 -- # local d=2 00:23:15.207 11:36:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:15.207 11:36:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@355 -- # echo 2 00:23:15.207 11:36:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:23:15.207 11:36:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:15.207 11:36:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:15.207 11:36:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@368 -- # return 0 00:23:15.207 11:36:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:15.207 11:36:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:23:15.207 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:15.207 --rc genhtml_branch_coverage=1 00:23:15.208 --rc genhtml_function_coverage=1 00:23:15.208 --rc genhtml_legend=1 00:23:15.208 --rc geninfo_all_blocks=1 00:23:15.208 --rc geninfo_unexecuted_blocks=1 00:23:15.208 00:23:15.208 ' 00:23:15.208 11:36:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:23:15.208 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:15.208 --rc genhtml_branch_coverage=1 00:23:15.208 --rc genhtml_function_coverage=1 00:23:15.208 --rc genhtml_legend=1 00:23:15.208 --rc geninfo_all_blocks=1 00:23:15.208 --rc geninfo_unexecuted_blocks=1 00:23:15.208 00:23:15.208 ' 00:23:15.208 11:36:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:23:15.208 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:15.208 --rc genhtml_branch_coverage=1 00:23:15.208 --rc genhtml_function_coverage=1 00:23:15.208 --rc genhtml_legend=1 00:23:15.208 --rc geninfo_all_blocks=1 00:23:15.208 --rc geninfo_unexecuted_blocks=1 00:23:15.208 00:23:15.208 ' 00:23:15.208 11:36:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:23:15.208 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:15.208 --rc genhtml_branch_coverage=1 00:23:15.208 --rc genhtml_function_coverage=1 00:23:15.208 --rc genhtml_legend=1 00:23:15.208 --rc geninfo_all_blocks=1 00:23:15.208 --rc geninfo_unexecuted_blocks=1 00:23:15.208 00:23:15.208 ' 00:23:15.208 11:36:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:15.208 11:36:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@7 -- # uname -s 00:23:15.208 11:36:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:15.208 11:36:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:15.208 11:36:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:15.208 11:36:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:15.208 11:36:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:15.208 11:36:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:15.208 11:36:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:15.208 11:36:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:15.208 11:36:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:15.208 11:36:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:15.208 11:36:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a 00:23:15.208 11:36:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=806d442c-08ba-4719-b76d-33169283459a 00:23:15.208 11:36:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:15.208 11:36:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:15.208 11:36:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:15.208 11:36:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:15.208 11:36:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:15.208 11:36:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:23:15.208 11:36:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:15.208 11:36:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:15.208 11:36:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:15.208 11:36:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:15.208 11:36:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:15.208 11:36:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:15.208 11:36:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@5 -- # export PATH 00:23:15.208 11:36:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:15.208 11:36:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@51 -- # : 0 00:23:15.208 11:36:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:15.208 11:36:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:15.208 11:36:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:15.208 11:36:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:15.208 11:36:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:15.208 11:36:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:15.208 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:15.208 11:36:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:15.208 11:36:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:15.208 11:36:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:15.208 11:36:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:15.208 11:36:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:15.208 11:36:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@14 
-- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:15.208 11:36:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:23:15.208 11:36:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:15.208 11:36:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:23:15.208 11:36:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@30 -- # nvmftestinit 00:23:15.208 11:36:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:15.208 11:36:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:15.208 11:36:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:15.208 11:36:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:15.208 11:36:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:15.208 11:36:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:15.208 11:36:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:15.208 11:36:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:15.208 11:36:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:23:15.208 11:36:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:23:15.208 11:36:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:23:15.208 11:36:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:23:15.208 11:36:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:23:15.208 11:36:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@460 -- # nvmf_veth_init 00:23:15.208 11:36:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:15.208 11:36:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:23:15.208 11:36:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:23:15.208 11:36:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:23:15.208 11:36:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:15.208 11:36:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:23:15.208 11:36:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:23:15.208 11:36:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:23:15.208 11:36:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:23:15.208 11:36:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:23:15.208 11:36:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:23:15.208 11:36:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@156 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:15.208 11:36:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:23:15.208 11:36:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:23:15.208 11:36:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:23:15.208 11:36:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:23:15.209 11:36:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:23:15.209 Cannot find device "nvmf_init_br" 00:23:15.209 11:36:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@162 -- # true 00:23:15.209 11:36:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:23:15.468 Cannot find device "nvmf_init_br2" 00:23:15.468 11:36:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@163 -- # true 00:23:15.468 11:36:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:23:15.468 Cannot find device "nvmf_tgt_br" 00:23:15.468 11:36:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@164 -- # true 00:23:15.468 11:36:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:23:15.468 Cannot find device "nvmf_tgt_br2" 00:23:15.468 11:36:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@165 -- # true 00:23:15.468 11:36:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:23:15.468 Cannot find device "nvmf_init_br" 00:23:15.468 11:36:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@166 -- # true 00:23:15.468 11:36:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:23:15.468 Cannot find device "nvmf_init_br2" 00:23:15.468 11:36:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@167 -- # true 00:23:15.468 11:36:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:23:15.468 Cannot find device "nvmf_tgt_br" 00:23:15.468 11:36:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@168 -- # true 00:23:15.468 11:36:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:23:15.468 Cannot find device "nvmf_tgt_br2" 00:23:15.468 11:36:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@169 -- # true 00:23:15.468 11:36:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:23:15.468 Cannot find device "nvmf_br" 00:23:15.468 11:36:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@170 -- # true 00:23:15.468 11:36:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:23:15.468 Cannot find device "nvmf_init_if" 00:23:15.468 11:36:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@171 -- # true 00:23:15.468 11:36:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:23:15.468 Cannot find device "nvmf_init_if2" 00:23:15.468 11:36:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@172 -- # true 00:23:15.468 11:36:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 
00:23:15.468 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:15.468 11:36:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@173 -- # true 00:23:15.468 11:36:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:15.468 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:15.468 11:36:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@174 -- # true 00:23:15.468 11:36:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:23:15.468 11:36:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:23:15.468 11:36:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:23:15.468 11:36:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:23:15.468 11:36:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:23:15.468 11:36:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:23:15.468 11:36:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:23:15.468 11:36:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:23:15.468 11:36:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:23:15.468 11:36:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:23:15.468 11:36:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:23:15.468 11:36:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:23:15.468 11:36:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:23:15.468 11:36:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:23:15.468 11:36:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:23:15.468 11:36:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:23:15.468 11:36:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:23:15.468 11:36:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:23:15.468 11:36:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:23:15.468 11:36:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:23:15.468 11:36:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:23:15.468 11:36:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:23:15.468 11:36:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 
00:23:15.468 11:36:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:23:15.727 11:36:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:23:15.727 11:36:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:23:15.727 11:36:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:23:15.727 11:36:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:23:15.727 11:36:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:23:15.727 11:36:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:23:15.727 11:36:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:23:15.727 11:36:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:23:15.727 11:36:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:23:15.727 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:23:15.727 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.072 ms 00:23:15.727 00:23:15.727 --- 10.0.0.3 ping statistics --- 00:23:15.727 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:15.727 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:23:15.727 11:36:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:23:15.727 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:23:15.727 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.053 ms 00:23:15.727 00:23:15.727 --- 10.0.0.4 ping statistics --- 00:23:15.727 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:15.727 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:23:15.727 11:36:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:23:15.727 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:15.727 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:23:15.727 00:23:15.727 --- 10.0.0.1 ping statistics --- 00:23:15.727 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:15.727 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:23:15.727 11:36:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:23:15.727 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:23:15.727 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.071 ms 00:23:15.727 00:23:15.727 --- 10.0.0.2 ping statistics --- 00:23:15.727 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:15.727 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:23:15.727 11:36:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:15.727 11:36:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@461 -- # return 0 00:23:15.727 11:36:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:15.727 11:36:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:15.727 11:36:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:15.727 11:36:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:15.728 11:36:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:15.728 11:36:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:15.728 11:36:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:15.728 11:36:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@32 -- # nvmfappstart -m 0x3 00:23:15.728 11:36:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:15.728 11:36:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:15.728 11:36:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:23:15.728 11:36:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@509 -- # nvmfpid=95379 00:23:15.728 11:36:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@510 -- # waitforlisten 95379 00:23:15.728 11:36:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@835 -- # '[' -z 95379 ']' 00:23:15.728 11:36:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:15.728 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:15.728 11:36:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:23:15.728 11:36:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:15.728 11:36:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:15.728 11:36:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:15.728 11:36:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:23:15.728 [2024-11-20 11:36:01.199388] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 
00:23:15.728 [2024-11-20 11:36:01.199484] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:15.986 [2024-11-20 11:36:01.348371] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:23:15.986 [2024-11-20 11:36:01.386019] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:15.986 [2024-11-20 11:36:01.386275] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:15.986 [2024-11-20 11:36:01.386499] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:15.986 [2024-11-20 11:36:01.386732] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:15.986 [2024-11-20 11:36:01.386889] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:15.986 [2024-11-20 11:36:01.387912] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:15.986 [2024-11-20 11:36:01.387926] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:15.986 11:36:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:15.986 11:36:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@868 -- # return 0 00:23:15.986 11:36:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:15.986 11:36:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:15.986 11:36:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:23:15.986 11:36:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:15.986 11:36:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@33 -- # nvmfapp_pid=95379 00:23:15.986 11:36:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:23:16.245 [2024-11-20 11:36:01.817713] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:16.504 11:36:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:23:16.763 Malloc0 00:23:16.763 11:36:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:23:17.021 11:36:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:17.293 11:36:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:23:17.553 [2024-11-20 11:36:02.958060] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:23:17.553 11:36:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 
4421 00:23:17.812 [2024-11-20 11:36:03.218210] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:23:17.812 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:17.812 11:36:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@44 -- # bdevperf_pid=95469 00:23:17.812 11:36:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:23:17.812 11:36:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@46 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:17.812 11:36:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@47 -- # waitforlisten 95469 /var/tmp/bdevperf.sock 00:23:17.812 11:36:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@835 -- # '[' -z 95469 ']' 00:23:17.812 11:36:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:17.812 11:36:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:17.812 11:36:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:17.812 11:36:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:17.812 11:36:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:23:18.071 11:36:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:18.071 11:36:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@868 -- # return 0 00:23:18.071 11:36:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:23:18.330 11:36:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:23:18.898 Nvme0n1 00:23:18.898 11:36:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:23:19.156 Nvme0n1 00:23:19.156 11:36:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@78 -- # sleep 1 00:23:19.156 11:36:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:23:20.091 11:36:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@81 -- # set_ANA_state non_optimized optimized 00:23:20.091 11:36:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:23:20.658 11:36:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 
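The two rpc.py calls above are the test's set_ANA_state step: the subsystem nqn.2016-06.io.spdk:cnode1 stays reachable on both TCP listeners while 10.0.0.3:4420 is marked non_optimized and 10.0.0.3:4421 optimized, so the multipath-enabled bdevperf job is expected to steer its I/O to 4421. A minimal sketch of that flip and of reading the state back, reusing the RPC calls that appear in this trace (the rpc shell variable is only shorthand introduced here, not part of the original scripts):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # mark 4420 as the non-optimized path and 4421 as the optimized one
    $rpc nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized
    $rpc nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized
    # read back which listener currently advertises the optimized ANA state
    $rpc nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 | jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid'

The confirm_io_on_port step that follows pairs this read-back with a bpftrace probe (scripts/bpf/nvmf_path.bt) attached to the running nvmf_tgt, so the check is on where I/O actually lands, not just on the advertised state.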
00:23:20.916 11:36:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@83 -- # confirm_io_on_port optimized 4421 00:23:20.916 11:36:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=95543 00:23:20.916 11:36:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 95379 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:23:20.916 11:36:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:23:27.480 11:36:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:23:27.480 11:36:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:23:27.480 11:36:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:23:27.480 11:36:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:27.480 Attaching 4 probes... 00:23:27.480 @path[10.0.0.3, 4421]: 16446 00:23:27.480 @path[10.0.0.3, 4421]: 17057 00:23:27.480 @path[10.0.0.3, 4421]: 17071 00:23:27.480 @path[10.0.0.3, 4421]: 17181 00:23:27.480 @path[10.0.0.3, 4421]: 17176 00:23:27.480 11:36:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:23:27.480 11:36:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:23:27.480 11:36:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:23:27.480 11:36:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:23:27.480 11:36:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:23:27.480 11:36:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:23:27.480 11:36:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 95543 00:23:27.480 11:36:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:27.480 11:36:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@86 -- # set_ANA_state non_optimized inaccessible 00:23:27.480 11:36:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:23:27.480 11:36:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:23:27.739 11:36:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@87 -- # confirm_io_on_port non_optimized 4420 00:23:27.739 11:36:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=95681 00:23:27.739 11:36:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:23:27.739 11:36:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 95379 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:23:34.303 11:36:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | 
select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:23:34.303 11:36:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:23:34.303 11:36:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:23:34.303 11:36:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:34.303 Attaching 4 probes... 00:23:34.303 @path[10.0.0.3, 4420]: 16548 00:23:34.303 @path[10.0.0.3, 4420]: 17135 00:23:34.303 @path[10.0.0.3, 4420]: 16652 00:23:34.303 @path[10.0.0.3, 4420]: 17180 00:23:34.303 @path[10.0.0.3, 4420]: 16879 00:23:34.303 11:36:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:23:34.303 11:36:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:23:34.303 11:36:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:23:34.303 11:36:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:23:34.303 11:36:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:23:34.303 11:36:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:23:34.303 11:36:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 95681 00:23:34.303 11:36:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:34.303 11:36:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@89 -- # set_ANA_state inaccessible optimized 00:23:34.303 11:36:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:23:34.562 11:36:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:23:34.822 11:36:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@90 -- # confirm_io_on_port optimized 4421 00:23:34.822 11:36:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=95813 00:23:34.822 11:36:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:23:34.822 11:36:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 95379 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:23:41.383 11:36:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:23:41.383 11:36:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:23:41.383 11:36:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:23:41.383 11:36:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:41.383 Attaching 4 probes... 
00:23:41.383 @path[10.0.0.3, 4421]: 12647 00:23:41.383 @path[10.0.0.3, 4421]: 16452 00:23:41.383 @path[10.0.0.3, 4421]: 17119 00:23:41.383 @path[10.0.0.3, 4421]: 17192 00:23:41.383 @path[10.0.0.3, 4421]: 17094 00:23:41.383 11:36:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:23:41.383 11:36:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:23:41.383 11:36:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:23:41.383 11:36:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:23:41.384 11:36:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:23:41.384 11:36:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:23:41.384 11:36:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 95813 00:23:41.384 11:36:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:41.384 11:36:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@93 -- # set_ANA_state inaccessible inaccessible 00:23:41.384 11:36:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:23:41.384 11:36:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:23:41.951 11:36:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@94 -- # confirm_io_on_port '' '' 00:23:41.951 11:36:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=95944 00:23:41.951 11:36:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 95379 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:23:41.951 11:36:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:23:48.514 11:36:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:23:48.514 11:36:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="") | .address.trsvcid' 00:23:48.514 11:36:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port= 00:23:48.514 11:36:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:48.514 Attaching 4 probes... 
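With both listeners just forced to the inaccessible ANA state, the next confirm_io_on_port is invoked with empty expectations ('' ''): no path is usable, so the bpftrace capture should contain no @path[...] samples and the derived active port stays empty. The port extraction the test applies to the trace file can be sketched roughly as below, combining the same awk/cut/sed pipeline visible in this run into one line (the relative trace.txt name stands in for /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt):

    # pull the first "@path[10.0.0.3, <port>]: <count>" sample out of the bpftrace output;
    # with every listener inaccessible there are no such lines, so port stays empty
    port=$(cat trace.txt | awk '$1=="@path[10.0.0.3," {print $2}' | cut -d ']' -f1 | sed -n 1p)
    [[ "$port" == "" ]] && echo "no active path while all ANA states are inaccessible"
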
00:23:48.514 00:23:48.514 00:23:48.514 00:23:48.514 00:23:48.514 00:23:48.514 11:36:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:23:48.514 11:36:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:23:48.514 11:36:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:23:48.514 11:36:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port= 00:23:48.514 11:36:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ '' == '' ]] 00:23:48.514 11:36:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ '' == '' ]] 00:23:48.514 11:36:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 95944 00:23:48.514 11:36:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:48.514 11:36:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@96 -- # set_ANA_state non_optimized optimized 00:23:48.514 11:36:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:23:48.514 11:36:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:23:48.776 11:36:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@97 -- # confirm_io_on_port optimized 4421 00:23:48.776 11:36:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=96076 00:23:48.776 11:36:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:23:48.776 11:36:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 95379 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:23:55.358 11:36:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:23:55.358 11:36:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:23:55.358 11:36:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:23:55.358 11:36:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:55.358 Attaching 4 probes... 
00:23:55.358 @path[10.0.0.3, 4421]: 16686 00:23:55.358 @path[10.0.0.3, 4421]: 16910 00:23:55.358 @path[10.0.0.3, 4421]: 14718 00:23:55.358 @path[10.0.0.3, 4421]: 16088 00:23:55.358 @path[10.0.0.3, 4421]: 15001 00:23:55.358 11:36:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:23:55.358 11:36:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:23:55.358 11:36:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:23:55.358 11:36:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:23:55.358 11:36:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:23:55.358 11:36:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:23:55.358 11:36:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 96076 00:23:55.358 11:36:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:55.358 11:36:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:23:55.617 [2024-11-20 11:36:40.977230] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xad5e90 is same with the state(6) to be set 00:23:55.617 [2024-11-20 11:36:40.977282] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xad5e90 is same with the state(6) to be set 00:23:55.617 [2024-11-20 11:36:40.977295] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xad5e90 is same with the state(6) to be set 00:23:55.617 [2024-11-20 11:36:40.977304] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xad5e90 is same with the state(6) to be set 00:23:55.617 [2024-11-20 11:36:40.977313] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xad5e90 is same with the state(6) to be set 00:23:55.617 [2024-11-20 11:36:40.977322] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xad5e90 is same with the state(6) to be set 00:23:55.617 [2024-11-20 11:36:40.977330] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xad5e90 is same with the state(6) to be set 00:23:55.617 [2024-11-20 11:36:40.977338] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xad5e90 is same with the state(6) to be set 00:23:55.617 [2024-11-20 11:36:40.977346] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xad5e90 is same with the state(6) to be set 00:23:55.617 [2024-11-20 11:36:40.977354] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xad5e90 is same with the state(6) to be set 00:23:55.617 [2024-11-20 11:36:40.977362] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xad5e90 is same with the state(6) to be set 00:23:55.617 [2024-11-20 11:36:40.977371] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xad5e90 is same with the state(6) to be set 00:23:55.617 [2024-11-20 11:36:40.977379] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xad5e90 is same with the state(6) to be set 00:23:55.617 [2024-11-20 11:36:40.977387] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: 
The recv state of tqpair=0xad5e90 is same with the state(6) to be set 00:23:55.617 [2024-11-20 11:36:40.977395] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xad5e90 is same with the state(6) to be set 00:23:55.617 [2024-11-20 11:36:40.977403] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xad5e90 is same with the state(6) to be set 00:23:55.617 [2024-11-20 11:36:40.977412] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xad5e90 is same with the state(6) to be set 00:23:55.617 [2024-11-20 11:36:40.977420] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xad5e90 is same with the state(6) to be set 00:23:55.617 [2024-11-20 11:36:40.977427] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xad5e90 is same with the state(6) to be set 00:23:55.617 [2024-11-20 11:36:40.977436] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xad5e90 is same with the state(6) to be set 00:23:55.617 [2024-11-20 11:36:40.977444] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xad5e90 is same with the state(6) to be set 00:23:55.617 [2024-11-20 11:36:40.977452] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xad5e90 is same with the state(6) to be set 00:23:55.617 11:36:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@101 -- # sleep 1 00:23:56.553 11:36:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@104 -- # confirm_io_on_port non_optimized 4420 00:23:56.553 11:36:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=96216 00:23:56.553 11:36:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:23:56.553 11:36:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 95379 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:24:03.113 11:36:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:24:03.113 11:36:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:24:03.113 11:36:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:24:03.114 11:36:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:24:03.114 Attaching 4 probes... 
00:24:03.114 @path[10.0.0.3, 4420]: 14667 00:24:03.114 @path[10.0.0.3, 4420]: 16033 00:24:03.114 @path[10.0.0.3, 4420]: 16313 00:24:03.114 @path[10.0.0.3, 4420]: 16468 00:24:03.114 @path[10.0.0.3, 4420]: 15982 00:24:03.114 11:36:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:24:03.114 11:36:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:24:03.114 11:36:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:24:03.114 11:36:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:24:03.114 11:36:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:24:03.114 11:36:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:24:03.114 11:36:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 96216 00:24:03.114 11:36:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:24:03.114 11:36:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:24:03.114 [2024-11-20 11:36:48.672915] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:24:03.114 11:36:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:24:03.682 11:36:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@111 -- # sleep 6 00:24:10.252 11:36:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@112 -- # confirm_io_on_port optimized 4421 00:24:10.252 11:36:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=96409 00:24:10.252 11:36:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:24:10.252 11:36:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 95379 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:24:15.518 11:37:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:24:15.518 11:37:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:24:15.777 11:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:24:15.777 11:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:24:15.777 Attaching 4 probes... 
00:24:15.777 @path[10.0.0.3, 4421]: 16412 00:24:15.777 @path[10.0.0.3, 4421]: 16512 00:24:15.777 @path[10.0.0.3, 4421]: 16403 00:24:15.777 @path[10.0.0.3, 4421]: 16470 00:24:15.777 @path[10.0.0.3, 4421]: 16280 00:24:15.777 11:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:24:15.777 11:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:24:15.777 11:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:24:15.777 11:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:24:15.777 11:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:24:15.777 11:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:24:15.777 11:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 96409 00:24:15.777 11:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:24:15.777 11:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@114 -- # killprocess 95469 00:24:15.777 11:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@954 -- # '[' -z 95469 ']' 00:24:15.777 11:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@958 -- # kill -0 95469 00:24:15.777 11:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@959 -- # uname 00:24:15.777 11:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:15.777 11:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 95469 00:24:15.777 killing process with pid 95469 00:24:15.777 11:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:24:15.777 11:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:24:15.777 11:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@972 -- # echo 'killing process with pid 95469' 00:24:15.777 11:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@973 -- # kill 95469 00:24:15.777 11:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@978 -- # wait 95469 00:24:15.777 { 00:24:15.777 "results": [ 00:24:15.777 { 00:24:15.777 "job": "Nvme0n1", 00:24:15.777 "core_mask": "0x4", 00:24:15.777 "workload": "verify", 00:24:15.777 "status": "terminated", 00:24:15.777 "verify_range": { 00:24:15.777 "start": 0, 00:24:15.777 "length": 16384 00:24:15.777 }, 00:24:15.777 "queue_depth": 128, 00:24:15.777 "io_size": 4096, 00:24:15.777 "runtime": 56.54839, 00:24:15.777 "iops": 7116.948864503481, 00:24:15.777 "mibps": 27.80058150196672, 00:24:15.777 "io_failed": 0, 00:24:15.777 "io_timeout": 0, 00:24:15.777 "avg_latency_us": 17953.7453319515, 00:24:15.777 "min_latency_us": 826.6472727272727, 00:24:15.777 "max_latency_us": 7046430.72 00:24:15.777 } 00:24:15.777 ], 00:24:15.777 "core_count": 1 00:24:15.777 } 00:24:16.043 11:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@116 -- # wait 95469 00:24:16.043 11:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@118 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:24:16.043 [2024-11-20 11:36:03.312175] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 
24.03.0 initialization... 00:24:16.043 [2024-11-20 11:36:03.312328] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95469 ] 00:24:16.044 [2024-11-20 11:36:03.467457] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:16.044 [2024-11-20 11:36:03.500415] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:16.044 Running I/O for 90 seconds... 00:24:16.044 8906.00 IOPS, 34.79 MiB/s [2024-11-20T11:37:01.633Z] 8905.00 IOPS, 34.79 MiB/s [2024-11-20T11:37:01.633Z] 8768.33 IOPS, 34.25 MiB/s [2024-11-20T11:37:01.633Z] 8703.75 IOPS, 34.00 MiB/s [2024-11-20T11:37:01.633Z] 8668.20 IOPS, 33.86 MiB/s [2024-11-20T11:37:01.633Z] 8659.83 IOPS, 33.83 MiB/s [2024-11-20T11:37:01.633Z] 8647.14 IOPS, 33.78 MiB/s [2024-11-20T11:37:01.633Z] 8646.62 IOPS, 33.78 MiB/s [2024-11-20T11:37:01.633Z] [2024-11-20 11:36:13.257981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:62048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.044 [2024-11-20 11:36:13.258054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:16.044 [2024-11-20 11:36:13.258117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:62312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.044 [2024-11-20 11:36:13.258140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:16.044 [2024-11-20 11:36:13.258164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:62320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.044 [2024-11-20 11:36:13.258181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:16.044 [2024-11-20 11:36:13.258205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:62328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.044 [2024-11-20 11:36:13.258222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:16.044 [2024-11-20 11:36:13.258244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:62336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.044 [2024-11-20 11:36:13.258260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:16.044 [2024-11-20 11:36:13.258283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:62344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.044 [2024-11-20 11:36:13.258299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:16.044 [2024-11-20 11:36:13.258322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:62352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.044 [2024-11-20 11:36:13.258338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:16.044 
[2024-11-20 11:36:13.258360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:62360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.044 [2024-11-20 11:36:13.258376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:16.044 [2024-11-20 11:36:13.258399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:62368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.044 [2024-11-20 11:36:13.258415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:16.044 [2024-11-20 11:36:13.258437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:62376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.044 [2024-11-20 11:36:13.258454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:16.044 [2024-11-20 11:36:13.258505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:62384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.044 [2024-11-20 11:36:13.258523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:16.044 [2024-11-20 11:36:13.258546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:62392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.044 [2024-11-20 11:36:13.258563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:16.044 [2024-11-20 11:36:13.258586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:62400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.044 [2024-11-20 11:36:13.258602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:16.044 [2024-11-20 11:36:13.258642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:62408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.044 [2024-11-20 11:36:13.258661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:16.044 [2024-11-20 11:36:13.258684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:62416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.044 [2024-11-20 11:36:13.258700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:16.044 [2024-11-20 11:36:13.258722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:62424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.044 [2024-11-20 11:36:13.258738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:16.044 [2024-11-20 11:36:13.258761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:62432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.044 [2024-11-20 11:36:13.258777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 
cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:16.044 [2024-11-20 11:36:13.258812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:62440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.044 [2024-11-20 11:36:13.258828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:16.044 [2024-11-20 11:36:13.258851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:62448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.044 [2024-11-20 11:36:13.258867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:16.044 [2024-11-20 11:36:13.258890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:62456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.044 [2024-11-20 11:36:13.258906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:16.044 [2024-11-20 11:36:13.258928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:62464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.044 [2024-11-20 11:36:13.258944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:16.044 [2024-11-20 11:36:13.258967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:62472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.044 [2024-11-20 11:36:13.258983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:16.044 [2024-11-20 11:36:13.259018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:62480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.044 [2024-11-20 11:36:13.259035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:16.044 [2024-11-20 11:36:13.259058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:62488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.044 [2024-11-20 11:36:13.259075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:16.044 [2024-11-20 11:36:13.259097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:62496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.044 [2024-11-20 11:36:13.259114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:16.044 [2024-11-20 11:36:13.259137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:62504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.044 [2024-11-20 11:36:13.259153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:16.044 [2024-11-20 11:36:13.259176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:62512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.044 [2024-11-20 11:36:13.259192] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:16.044 [2024-11-20 11:36:13.259214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:62520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.044 [2024-11-20 11:36:13.259231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:16.044 [2024-11-20 11:36:13.259253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:62528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.044 [2024-11-20 11:36:13.259269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:16.044 [2024-11-20 11:36:13.259292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:62536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.044 [2024-11-20 11:36:13.259313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.044 [2024-11-20 11:36:13.259336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:62544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.044 [2024-11-20 11:36:13.259353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:16.045 [2024-11-20 11:36:13.259376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:62552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.045 [2024-11-20 11:36:13.259392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:16.045 [2024-11-20 11:36:13.259414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:62560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.045 [2024-11-20 11:36:13.259431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:16.045 [2024-11-20 11:36:13.259454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:62568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.045 [2024-11-20 11:36:13.259470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:16.045 [2024-11-20 11:36:13.259493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:62576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.045 [2024-11-20 11:36:13.259517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:16.045 [2024-11-20 11:36:13.259541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:62584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.045 [2024-11-20 11:36:13.259557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:16.045 [2024-11-20 11:36:13.259580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:62592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.045 [2024-11-20 11:36:13.259596] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:16.045 [2024-11-20 11:36:13.259631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:62600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.045 [2024-11-20 11:36:13.259650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:16.045 [2024-11-20 11:36:13.259673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:62608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.045 [2024-11-20 11:36:13.259690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:16.045 [2024-11-20 11:36:13.259713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:62616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.045 [2024-11-20 11:36:13.259729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:16.045 [2024-11-20 11:36:13.259752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:62624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.045 [2024-11-20 11:36:13.259768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:16.045 [2024-11-20 11:36:13.259790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:62632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.045 [2024-11-20 11:36:13.259806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:16.045 [2024-11-20 11:36:13.259829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:62640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.045 [2024-11-20 11:36:13.259845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:16.045 [2024-11-20 11:36:13.259868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:62648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.045 [2024-11-20 11:36:13.259884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:16.045 [2024-11-20 11:36:13.259906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:62656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.045 [2024-11-20 11:36:13.259922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:16.045 [2024-11-20 11:36:13.259945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:62664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.045 [2024-11-20 11:36:13.259961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:16.045 [2024-11-20 11:36:13.259983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:62672 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:24:16.045 [2024-11-20 11:36:13.260009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:16.045 [2024-11-20 11:36:13.260034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:62680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.045 [2024-11-20 11:36:13.260051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:16.045 [2024-11-20 11:36:13.260834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:62688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.045 [2024-11-20 11:36:13.260865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:16.045 [2024-11-20 11:36:13.260896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:62696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.045 [2024-11-20 11:36:13.260915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:16.045 [2024-11-20 11:36:13.260939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:62056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.045 [2024-11-20 11:36:13.260955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:16.045 [2024-11-20 11:36:13.260978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:62064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.045 [2024-11-20 11:36:13.260994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:16.045 [2024-11-20 11:36:13.261017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:62072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.045 [2024-11-20 11:36:13.261034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:16.045 [2024-11-20 11:36:13.261056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:62080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.045 [2024-11-20 11:36:13.261073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:16.045 [2024-11-20 11:36:13.261095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:62088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.045 [2024-11-20 11:36:13.261111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:16.045 [2024-11-20 11:36:13.261135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:62096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.045 [2024-11-20 11:36:13.261151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:16.045 [2024-11-20 11:36:13.261174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:65 nsid:1 lba:62104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.045 [2024-11-20 11:36:13.261190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:16.045 [2024-11-20 11:36:13.261213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:62112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.045 [2024-11-20 11:36:13.261229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:16.045 [2024-11-20 11:36:13.261252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:62120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.045 [2024-11-20 11:36:13.261268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:16.045 [2024-11-20 11:36:13.261304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:62128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.045 [2024-11-20 11:36:13.261321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:16.045 [2024-11-20 11:36:13.261344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:62136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.045 [2024-11-20 11:36:13.261361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:16.045 [2024-11-20 11:36:13.261383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:62144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.045 [2024-11-20 11:36:13.261401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:16.045 [2024-11-20 11:36:13.261424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:62152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.045 [2024-11-20 11:36:13.261440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:16.045 [2024-11-20 11:36:13.261463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:62160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.045 [2024-11-20 11:36:13.261480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:16.045 [2024-11-20 11:36:13.261503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:62168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.046 [2024-11-20 11:36:13.261520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:16.046 [2024-11-20 11:36:13.261543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:62176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.046 [2024-11-20 11:36:13.261560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:16.046 [2024-11-20 11:36:13.261583] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:62184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.046 [2024-11-20 11:36:13.261600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:16.046 [2024-11-20 11:36:13.261638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:62192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.046 [2024-11-20 11:36:13.261658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:16.046 [2024-11-20 11:36:13.261682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:62200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.046 [2024-11-20 11:36:13.261698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:16.046 [2024-11-20 11:36:13.261721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:62208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.046 [2024-11-20 11:36:13.261738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:16.046 [2024-11-20 11:36:13.261771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:62216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.046 [2024-11-20 11:36:13.261793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:16.046 [2024-11-20 11:36:13.261826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:62224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.046 [2024-11-20 11:36:13.261844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:16.046 [2024-11-20 11:36:13.261867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:62232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.046 [2024-11-20 11:36:13.261884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:16.046 [2024-11-20 11:36:13.261907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:62240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.046 [2024-11-20 11:36:13.261924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:16.046 [2024-11-20 11:36:13.261954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:62248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.046 [2024-11-20 11:36:13.261971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:16.046 [2024-11-20 11:36:13.261994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:62256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.046 [2024-11-20 11:36:13.262010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 
sqhd:002e p:0 m:0 dnr:0 00:24:16.046 [2024-11-20 11:36:13.262033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:62264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.046 [2024-11-20 11:36:13.262049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:16.046 [2024-11-20 11:36:13.262072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:62272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.046 [2024-11-20 11:36:13.262103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:16.046 [2024-11-20 11:36:13.262129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:62280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.046 [2024-11-20 11:36:13.262146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:16.046 [2024-11-20 11:36:13.262169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:62288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.046 [2024-11-20 11:36:13.262186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:16.046 [2024-11-20 11:36:13.262209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:62296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.046 [2024-11-20 11:36:13.262225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:16.046 [2024-11-20 11:36:13.262249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:62304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.046 [2024-11-20 11:36:13.262265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:16.046 [2024-11-20 11:36:13.262288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:62704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.046 [2024-11-20 11:36:13.262304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:16.046 [2024-11-20 11:36:13.262327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:62712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.046 [2024-11-20 11:36:13.262352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:16.046 [2024-11-20 11:36:13.262376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:62720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.046 [2024-11-20 11:36:13.262393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:16.046 [2024-11-20 11:36:13.262416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:62728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.046 [2024-11-20 11:36:13.262432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:16.046 [2024-11-20 11:36:13.262455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:62736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.046 [2024-11-20 11:36:13.262472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:16.046 [2024-11-20 11:36:13.262496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:62744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.046 [2024-11-20 11:36:13.262512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:16.046 [2024-11-20 11:36:13.262535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:62752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.046 [2024-11-20 11:36:13.262551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:16.046 [2024-11-20 11:36:13.262575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:62760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.046 [2024-11-20 11:36:13.262592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:16.046 [2024-11-20 11:36:13.262627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:62768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.046 [2024-11-20 11:36:13.262647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:16.046 [2024-11-20 11:36:13.262670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:62776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.046 [2024-11-20 11:36:13.262687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:16.046 [2024-11-20 11:36:13.262710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:62784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.046 [2024-11-20 11:36:13.262726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:16.046 [2024-11-20 11:36:13.262749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:62792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.046 [2024-11-20 11:36:13.262770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:16.046 [2024-11-20 11:36:13.262795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:62800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.046 [2024-11-20 11:36:13.262812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:16.046 [2024-11-20 11:36:13.262834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:62808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.046 [2024-11-20 11:36:13.262859] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:16.046 [2024-11-20 11:36:13.262883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:62816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.046 [2024-11-20 11:36:13.262900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:16.046 [2024-11-20 11:36:13.262929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:62824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.046 [2024-11-20 11:36:13.262947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:16.046 [2024-11-20 11:36:13.262970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:62832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.047 [2024-11-20 11:36:13.262986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:16.047 [2024-11-20 11:36:13.263010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:62840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.047 [2024-11-20 11:36:13.263026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:16.047 [2024-11-20 11:36:13.263049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:62848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.047 [2024-11-20 11:36:13.263065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:16.047 [2024-11-20 11:36:13.263088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:62856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.047 [2024-11-20 11:36:13.263104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:16.047 [2024-11-20 11:36:13.263127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:62864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.047 [2024-11-20 11:36:13.263144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:16.047 [2024-11-20 11:36:13.263166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:62872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.047 [2024-11-20 11:36:13.263182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:16.047 [2024-11-20 11:36:13.263205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.047 [2024-11-20 11:36:13.263221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:16.047 [2024-11-20 11:36:13.263244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:62888 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:24:16.047 [2024-11-20 11:36:13.263260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:16.047 [2024-11-20 11:36:13.263283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:62896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.047 [2024-11-20 11:36:13.263300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:16.047 [2024-11-20 11:36:13.263322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:62904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.047 [2024-11-20 11:36:13.263339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:16.047 [2024-11-20 11:36:13.263369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:62912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.047 [2024-11-20 11:36:13.263386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:16.047 [2024-11-20 11:36:13.263409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:62920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.047 [2024-11-20 11:36:13.263428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:16.047 [2024-11-20 11:36:13.263452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:62928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.047 [2024-11-20 11:36:13.263468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:16.047 [2024-11-20 11:36:13.263491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:62936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.047 [2024-11-20 11:36:13.263507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:16.047 [2024-11-20 11:36:13.263530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:62944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.047 [2024-11-20 11:36:13.263546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:16.047 [2024-11-20 11:36:13.263570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:62952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.047 [2024-11-20 11:36:13.263587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:16.047 [2024-11-20 11:36:13.263611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:62960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.047 [2024-11-20 11:36:13.263641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:16.047 [2024-11-20 11:36:13.265446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 
nsid:1 lba:62968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.047 [2024-11-20 11:36:13.265479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:16.047 [2024-11-20 11:36:13.265509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:62976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.047 [2024-11-20 11:36:13.265528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:16.047 [2024-11-20 11:36:13.265551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:62984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.047 [2024-11-20 11:36:13.265568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:16.047 [2024-11-20 11:36:13.265591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:62992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.047 [2024-11-20 11:36:13.265607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:16.047 [2024-11-20 11:36:13.265649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:63000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.047 [2024-11-20 11:36:13.265668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:16.047 [2024-11-20 11:36:13.265708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:63008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.047 [2024-11-20 11:36:13.265726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:16.047 [2024-11-20 11:36:13.265749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:63016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.047 [2024-11-20 11:36:13.265778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:16.047 [2024-11-20 11:36:13.265803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:63024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.047 [2024-11-20 11:36:13.265820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:16.047 [2024-11-20 11:36:13.265844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:63032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.047 [2024-11-20 11:36:13.265860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:16.047 [2024-11-20 11:36:13.265883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:63040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.047 [2024-11-20 11:36:13.265899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:16.047 [2024-11-20 11:36:13.265922] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:63048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.047 [2024-11-20 11:36:13.265942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:16.047 [2024-11-20 11:36:13.265965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:63056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.047 [2024-11-20 11:36:13.265982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:16.047 [2024-11-20 11:36:13.266006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:63064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.047 [2024-11-20 11:36:13.266022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:16.047 8626.56 IOPS, 33.70 MiB/s [2024-11-20T11:37:01.636Z] 8594.40 IOPS, 33.57 MiB/s [2024-11-20T11:37:01.636Z] 8592.36 IOPS, 33.56 MiB/s [2024-11-20T11:37:01.636Z] 8576.08 IOPS, 33.50 MiB/s [2024-11-20T11:37:01.636Z] 8573.62 IOPS, 33.49 MiB/s [2024-11-20T11:37:01.636Z] 8565.50 IOPS, 33.46 MiB/s [2024-11-20T11:37:01.636Z] 8565.27 IOPS, 33.46 MiB/s [2024-11-20T11:37:01.636Z] [2024-11-20 11:36:19.945869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:121912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.047 [2024-11-20 11:36:19.945957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:16.047 [2024-11-20 11:36:19.946997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:121920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.047 [2024-11-20 11:36:19.947040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:16.047 [2024-11-20 11:36:19.947075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:121928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.047 [2024-11-20 11:36:19.947095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:16.048 [2024-11-20 11:36:19.947120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:121936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.048 [2024-11-20 11:36:19.947164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:16.048 [2024-11-20 11:36:19.947191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:121944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.048 [2024-11-20 11:36:19.947209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:16.048 [2024-11-20 11:36:19.947234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:121952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.048 [2024-11-20 11:36:19.947251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0017 p:0 
m:0 dnr:0 00:24:16.048 [2024-11-20 11:36:19.947276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:121960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.048 [2024-11-20 11:36:19.947293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:16.048 [2024-11-20 11:36:19.947317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:121968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.048 [2024-11-20 11:36:19.947334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:16.048 [2024-11-20 11:36:19.947358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:121976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.048 [2024-11-20 11:36:19.947374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:16.048 [2024-11-20 11:36:19.947398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:121984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.048 [2024-11-20 11:36:19.947415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:16.048 [2024-11-20 11:36:19.947440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:121992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.048 [2024-11-20 11:36:19.947456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:16.048 [2024-11-20 11:36:19.947481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:122000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.048 [2024-11-20 11:36:19.947498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:16.048 [2024-11-20 11:36:19.947522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:122008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.048 [2024-11-20 11:36:19.947539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:16.048 [2024-11-20 11:36:19.947564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:122016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.048 [2024-11-20 11:36:19.947581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:16.048 [2024-11-20 11:36:19.947626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:122024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.048 [2024-11-20 11:36:19.947647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:16.048 [2024-11-20 11:36:19.947673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:122032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.048 [2024-11-20 11:36:19.947690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:16.048 [2024-11-20 11:36:19.947724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:122040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.048 [2024-11-20 11:36:19.947742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:16.048 [2024-11-20 11:36:19.947766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:122048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.048 [2024-11-20 11:36:19.947783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:16.048 [2024-11-20 11:36:19.947808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:122056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.048 [2024-11-20 11:36:19.947825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:16.048 [2024-11-20 11:36:19.947849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:122064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.048 [2024-11-20 11:36:19.947866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:16.048 [2024-11-20 11:36:19.947890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:122072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.048 [2024-11-20 11:36:19.947907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:16.048 [2024-11-20 11:36:19.947932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:122080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.048 [2024-11-20 11:36:19.947948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:16.048 [2024-11-20 11:36:19.947972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:122088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.048 [2024-11-20 11:36:19.947988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:16.048 [2024-11-20 11:36:19.948012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:122096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.048 [2024-11-20 11:36:19.948029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:16.048 [2024-11-20 11:36:19.948053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:122104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.048 [2024-11-20 11:36:19.948069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:16.048 [2024-11-20 11:36:19.948093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:122112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.048 [2024-11-20 11:36:19.948110] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:16.048 [2024-11-20 11:36:19.948134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:122120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.048 [2024-11-20 11:36:19.948151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:16.048 [2024-11-20 11:36:19.948175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:122128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.048 [2024-11-20 11:36:19.948192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:16.048 [2024-11-20 11:36:19.948223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:122136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.048 [2024-11-20 11:36:19.948243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:16.048 [2024-11-20 11:36:19.948267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:122144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.048 [2024-11-20 11:36:19.948284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:16.048 [2024-11-20 11:36:19.948308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:122152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.048 [2024-11-20 11:36:19.948324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:16.048 [2024-11-20 11:36:19.948348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:122160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.048 [2024-11-20 11:36:19.948365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:16.048 [2024-11-20 11:36:19.948389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:122168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.048 [2024-11-20 11:36:19.948408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:16.048 [2024-11-20 11:36:19.948432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:122176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.048 [2024-11-20 11:36:19.948449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:16.048 [2024-11-20 11:36:19.948473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:122184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.048 [2024-11-20 11:36:19.948490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:16.048 [2024-11-20 11:36:19.948514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:122192 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:24:16.048 [2024-11-20 11:36:19.948530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:16.049 [2024-11-20 11:36:19.948555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:122200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.049 [2024-11-20 11:36:19.948571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:16.049 [2024-11-20 11:36:19.948595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:122208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.049 [2024-11-20 11:36:19.948623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:16.049 [2024-11-20 11:36:19.948651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:122216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.049 [2024-11-20 11:36:19.948668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:16.049 [2024-11-20 11:36:19.948704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:122224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.049 [2024-11-20 11:36:19.948720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:16.049 [2024-11-20 11:36:19.948763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:122232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.049 [2024-11-20 11:36:19.948782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:16.049 [2024-11-20 11:36:19.948806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:122240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.049 [2024-11-20 11:36:19.948823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:16.049 [2024-11-20 11:36:19.948846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:122248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.049 [2024-11-20 11:36:19.948863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:16.049 [2024-11-20 11:36:19.948887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:122256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.049 [2024-11-20 11:36:19.948903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:16.049 [2024-11-20 11:36:19.948928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:122264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.049 [2024-11-20 11:36:19.948957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:16.049 [2024-11-20 11:36:19.948992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:119 nsid:1 lba:122272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.049 [2024-11-20 11:36:19.949010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:16.049 [2024-11-20 11:36:19.949034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:122280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.049 [2024-11-20 11:36:19.949051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:16.049 [2024-11-20 11:36:19.949075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:122288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.049 [2024-11-20 11:36:19.949092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:16.049 [2024-11-20 11:36:19.949116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:122296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.049 [2024-11-20 11:36:19.949133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:16.049 [2024-11-20 11:36:19.949157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:122304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.049 [2024-11-20 11:36:19.949173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:16.049 [2024-11-20 11:36:19.949197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:122312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.049 [2024-11-20 11:36:19.949214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:16.049 [2024-11-20 11:36:19.949238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:122320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.049 [2024-11-20 11:36:19.949255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:16.049 [2024-11-20 11:36:19.949440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:122328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.049 [2024-11-20 11:36:19.949486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:16.049 [2024-11-20 11:36:19.949521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:122336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.049 [2024-11-20 11:36:19.949540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:16.049 [2024-11-20 11:36:19.949567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:122344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.049 [2024-11-20 11:36:19.949585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:16.049 [2024-11-20 11:36:19.949627] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:122352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.049 [2024-11-20 11:36:19.949648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:16.049 [2024-11-20 11:36:19.949676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:122360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.049 [2024-11-20 11:36:19.949693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:16.049 [2024-11-20 11:36:19.949721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:122368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.049 [2024-11-20 11:36:19.949738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:16.049 [2024-11-20 11:36:19.949778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:122376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.049 [2024-11-20 11:36:19.949808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:16.049 [2024-11-20 11:36:19.949836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:122384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.049 [2024-11-20 11:36:19.949853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:16.049 [2024-11-20 11:36:19.949881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:122392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.049 [2024-11-20 11:36:19.949898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:16.049 [2024-11-20 11:36:19.949925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:122400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.049 [2024-11-20 11:36:19.949942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:16.049 [2024-11-20 11:36:19.949969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:122408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.049 [2024-11-20 11:36:19.949986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:16.049 [2024-11-20 11:36:19.950014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:122416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.049 [2024-11-20 11:36:19.950031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:16.049 [2024-11-20 11:36:19.950059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:122424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.049 [2024-11-20 11:36:19.950092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 
sqhd:0052 p:0 m:0 dnr:0 00:24:16.050 [2024-11-20 11:36:19.950123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:122432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.050 [2024-11-20 11:36:19.950140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:16.050 [2024-11-20 11:36:19.950168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:122440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.050 [2024-11-20 11:36:19.950186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:16.050 [2024-11-20 11:36:19.950213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:122448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.050 [2024-11-20 11:36:19.950231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:16.050 [2024-11-20 11:36:19.950258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:122456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.050 [2024-11-20 11:36:19.950275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:16.050 [2024-11-20 11:36:19.950302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:122464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.050 [2024-11-20 11:36:19.950319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:16.050 [2024-11-20 11:36:19.950347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:122472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.050 [2024-11-20 11:36:19.950364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:16.050 [2024-11-20 11:36:19.950392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:122480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.050 [2024-11-20 11:36:19.950409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:16.050 [2024-11-20 11:36:19.950436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:122488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.050 [2024-11-20 11:36:19.950453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:16.050 [2024-11-20 11:36:19.950481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:122496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.050 [2024-11-20 11:36:19.950497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:16.050 [2024-11-20 11:36:19.950524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:122504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.050 [2024-11-20 11:36:19.950542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:16.050 [2024-11-20 11:36:19.950571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:122512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.050 [2024-11-20 11:36:19.950588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:16.050 [2024-11-20 11:36:19.950628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:122520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.050 [2024-11-20 11:36:19.950649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:16.050 [2024-11-20 11:36:19.950692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:122528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.050 [2024-11-20 11:36:19.950711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:16.050 [2024-11-20 11:36:19.950738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:122536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.050 [2024-11-20 11:36:19.950756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:16.050 [2024-11-20 11:36:19.950783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:122544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.050 [2024-11-20 11:36:19.950800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:16.050 [2024-11-20 11:36:19.950828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:121680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.050 [2024-11-20 11:36:19.950845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:16.050 [2024-11-20 11:36:19.950874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:121688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.050 [2024-11-20 11:36:19.950890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:16.050 [2024-11-20 11:36:19.950918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:121696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.050 [2024-11-20 11:36:19.950935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:16.050 [2024-11-20 11:36:19.950973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:121704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.050 [2024-11-20 11:36:19.950990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:16.050 [2024-11-20 11:36:19.951018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:121712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.050 [2024-11-20 
11:36:19.951035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:16.050 [2024-11-20 11:36:19.951063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:121720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.050 [2024-11-20 11:36:19.951080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:16.050 [2024-11-20 11:36:19.951107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:121728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.050 [2024-11-20 11:36:19.951124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:16.050 [2024-11-20 11:36:19.951153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:121736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.050 [2024-11-20 11:36:19.951170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:16.050 [2024-11-20 11:36:19.951198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:121744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.050 [2024-11-20 11:36:19.951214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:16.050 [2024-11-20 11:36:19.951250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:121752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.050 [2024-11-20 11:36:19.951268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:16.050 [2024-11-20 11:36:19.951296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:121760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.050 [2024-11-20 11:36:19.951312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:16.050 [2024-11-20 11:36:19.951341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:121768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.050 [2024-11-20 11:36:19.951358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:16.050 [2024-11-20 11:36:19.951386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:121776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.050 [2024-11-20 11:36:19.951403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:16.050 [2024-11-20 11:36:19.951431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:121784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.050 [2024-11-20 11:36:19.951447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:16.050 [2024-11-20 11:36:19.951475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:122552 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.050 [2024-11-20 11:36:19.951492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:16.050 [2024-11-20 11:36:19.951519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:121792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.050 [2024-11-20 11:36:19.951536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:16.050 [2024-11-20 11:36:19.951563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:121800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.050 [2024-11-20 11:36:19.951591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:16.050 [2024-11-20 11:36:19.951629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:121808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.050 [2024-11-20 11:36:19.951649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:16.050 [2024-11-20 11:36:19.951677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:121816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.051 [2024-11-20 11:36:19.951694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:16.051 [2024-11-20 11:36:19.951722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:121824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.051 [2024-11-20 11:36:19.951739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:16.051 [2024-11-20 11:36:19.951766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:121832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.051 [2024-11-20 11:36:19.951783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:16.051 [2024-11-20 11:36:19.951811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:121840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.051 [2024-11-20 11:36:19.951836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:16.051 [2024-11-20 11:36:19.951864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:121848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.051 [2024-11-20 11:36:19.951882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:16.051 [2024-11-20 11:36:19.951910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:121856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.051 [2024-11-20 11:36:19.951927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:16.051 [2024-11-20 11:36:19.951955] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:121864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.051 [2024-11-20 11:36:19.951972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:16.051 [2024-11-20 11:36:19.952000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:121872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.051 [2024-11-20 11:36:19.952017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:16.051 [2024-11-20 11:36:19.952044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:121880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.051 [2024-11-20 11:36:19.952062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:16.051 [2024-11-20 11:36:19.952090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:121888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.051 [2024-11-20 11:36:19.952107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:16.051 [2024-11-20 11:36:19.952134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:121896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.051 [2024-11-20 11:36:19.952151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:16.051 [2024-11-20 11:36:19.952179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:121904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.051 [2024-11-20 11:36:19.952196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:16.051 8118.62 IOPS, 31.71 MiB/s [2024-11-20T11:37:01.640Z] 8032.06 IOPS, 31.38 MiB/s [2024-11-20T11:37:01.640Z] 8044.33 IOPS, 31.42 MiB/s [2024-11-20T11:37:01.640Z] 8071.42 IOPS, 31.53 MiB/s [2024-11-20T11:37:01.640Z] 8095.65 IOPS, 31.62 MiB/s [2024-11-20T11:37:01.640Z] 8119.14 IOPS, 31.72 MiB/s [2024-11-20T11:37:01.640Z] 8125.68 IOPS, 31.74 MiB/s [2024-11-20T11:37:01.640Z] [2024-11-20 11:36:27.251436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:20704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.051 [2024-11-20 11:36:27.251508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:16.051 [2024-11-20 11:36:27.251570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:20712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.051 [2024-11-20 11:36:27.251593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:16.051 [2024-11-20 11:36:27.251635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:20720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.051 [2024-11-20 11:36:27.251680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 
sqhd:0004 p:0 m:0 dnr:0 00:24:16.051 [2024-11-20 11:36:27.251707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:20728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.051 [2024-11-20 11:36:27.251724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:16.051 [2024-11-20 11:36:27.251747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:20736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.051 [2024-11-20 11:36:27.251763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:16.051 [2024-11-20 11:36:27.251786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:20744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.051 [2024-11-20 11:36:27.251802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:16.051 [2024-11-20 11:36:27.251825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:20752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.051 [2024-11-20 11:36:27.251840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:16.051 [2024-11-20 11:36:27.251863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:20760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.051 [2024-11-20 11:36:27.251885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:16.051 [2024-11-20 11:36:27.251908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:20768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.051 [2024-11-20 11:36:27.251924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:16.051 [2024-11-20 11:36:27.251946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:20776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.051 [2024-11-20 11:36:27.251962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:16.051 [2024-11-20 11:36:27.251985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:20784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.051 [2024-11-20 11:36:27.252001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:16.051 [2024-11-20 11:36:27.252024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:20792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.051 [2024-11-20 11:36:27.252040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:16.051 [2024-11-20 11:36:27.252061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:20800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.051 [2024-11-20 11:36:27.252078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:16.051 [2024-11-20 11:36:27.252100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:20808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.051 [2024-11-20 11:36:27.252116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:16.051 [2024-11-20 11:36:27.252139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:20816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.051 [2024-11-20 11:36:27.252156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:16.051 [2024-11-20 11:36:27.252189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:20824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.051 [2024-11-20 11:36:27.252208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:16.051 [2024-11-20 11:36:27.252231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:20832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.051 [2024-11-20 11:36:27.252248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:16.051 [2024-11-20 11:36:27.252270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:20840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.051 [2024-11-20 11:36:27.252287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:16.051 [2024-11-20 11:36:27.252310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:20848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.051 [2024-11-20 11:36:27.252327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:16.051 [2024-11-20 11:36:27.252350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:20856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.051 [2024-11-20 11:36:27.252366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:16.051 [2024-11-20 11:36:27.252389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:20864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.051 [2024-11-20 11:36:27.252406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:16.051 [2024-11-20 11:36:27.252429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:20872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.052 [2024-11-20 11:36:27.252445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:16.052 [2024-11-20 11:36:27.252468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:20880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.052 [2024-11-20 11:36:27.252485] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:16.052 [2024-11-20 11:36:27.252508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:20888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.052 [2024-11-20 11:36:27.252524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:16.052 [2024-11-20 11:36:27.252547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:20896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.052 [2024-11-20 11:36:27.252563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:16.052 [2024-11-20 11:36:27.252587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:20904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.052 [2024-11-20 11:36:27.252603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:16.052 [2024-11-20 11:36:27.252820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:20912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.052 [2024-11-20 11:36:27.252848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:16.052 [2024-11-20 11:36:27.252889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:20920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.052 [2024-11-20 11:36:27.252909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:16.052 [2024-11-20 11:36:27.252934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:20928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.052 [2024-11-20 11:36:27.252951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:16.052 [2024-11-20 11:36:27.252976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:20936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.052 [2024-11-20 11:36:27.252993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:16.052 [2024-11-20 11:36:27.253017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:20944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.052 [2024-11-20 11:36:27.253033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:16.052 [2024-11-20 11:36:27.253058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:20952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.052 [2024-11-20 11:36:27.253076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:16.052 [2024-11-20 11:36:27.253101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:20960 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:24:16.052 [2024-11-20 11:36:27.253117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:16.052 [2024-11-20 11:36:27.253141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:20968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.052 [2024-11-20 11:36:27.253158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:16.052 [2024-11-20 11:36:27.253182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:20976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.052 [2024-11-20 11:36:27.253199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:16.052 [2024-11-20 11:36:27.253223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:20984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.052 [2024-11-20 11:36:27.253240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:16.052 [2024-11-20 11:36:27.253264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:20992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.052 [2024-11-20 11:36:27.253281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:16.052 [2024-11-20 11:36:27.253305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:21000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.052 [2024-11-20 11:36:27.253322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:16.052 [2024-11-20 11:36:27.253346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:21008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.052 [2024-11-20 11:36:27.253362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:16.052 [2024-11-20 11:36:27.253387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:21016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.052 [2024-11-20 11:36:27.253425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:16.052 [2024-11-20 11:36:27.253453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:21024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.052 [2024-11-20 11:36:27.253470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:16.052 [2024-11-20 11:36:27.253495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:21032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.052 [2024-11-20 11:36:27.253515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:16.052 [2024-11-20 11:36:27.253540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:78 nsid:1 lba:21040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.052 [2024-11-20 11:36:27.253557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:16.052 [2024-11-20 11:36:27.253581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:21048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.052 [2024-11-20 11:36:27.253598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:16.052 [2024-11-20 11:36:27.253638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:21056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.052 [2024-11-20 11:36:27.253658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:16.052 [2024-11-20 11:36:27.253683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:21064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.052 [2024-11-20 11:36:27.253700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:16.052 [2024-11-20 11:36:27.253725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:21072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.052 [2024-11-20 11:36:27.253741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:16.052 [2024-11-20 11:36:27.253776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:21080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.052 [2024-11-20 11:36:27.253795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:16.052 [2024-11-20 11:36:27.253821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:21088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.052 [2024-11-20 11:36:27.253837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:16.052 [2024-11-20 11:36:27.253862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:20384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.052 [2024-11-20 11:36:27.253879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:16.052 [2024-11-20 11:36:27.253903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:20392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.052 [2024-11-20 11:36:27.253920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:16.052 [2024-11-20 11:36:27.253945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:20400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.052 [2024-11-20 11:36:27.253970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:16.052 [2024-11-20 11:36:27.253996] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:20408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.052 [2024-11-20 11:36:27.254013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:16.052 [2024-11-20 11:36:27.254038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:20416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.052 [2024-11-20 11:36:27.254054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:16.052 [2024-11-20 11:36:27.254079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.052 [2024-11-20 11:36:27.254096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:16.053 [2024-11-20 11:36:27.254120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:20432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.053 [2024-11-20 11:36:27.254136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:16.053 [2024-11-20 11:36:27.254161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.053 [2024-11-20 11:36:27.254177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:16.053 [2024-11-20 11:36:27.254203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.053 [2024-11-20 11:36:27.254220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:16.053 [2024-11-20 11:36:27.254244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:20456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.053 [2024-11-20 11:36:27.254261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:16.053 [2024-11-20 11:36:27.254286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:20464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.053 [2024-11-20 11:36:27.254302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:16.053 [2024-11-20 11:36:27.254327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:20472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.053 [2024-11-20 11:36:27.254343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:16.053 [2024-11-20 11:36:27.254368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:20480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.053 [2024-11-20 11:36:27.254384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:003f p:0 m:0 dnr:0 
00:24:16.053 [2024-11-20 11:36:27.254408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:20488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.053 [2024-11-20 11:36:27.254425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:16.053 [2024-11-20 11:36:27.254449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:20496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.053 [2024-11-20 11:36:27.254466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:16.053 [2024-11-20 11:36:27.254502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:20504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.053 [2024-11-20 11:36:27.254519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:16.053 [2024-11-20 11:36:27.254545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:20512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.053 [2024-11-20 11:36:27.254561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:16.053 [2024-11-20 11:36:27.254586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.053 [2024-11-20 11:36:27.254602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:16.053 [2024-11-20 11:36:27.254638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:20528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.053 [2024-11-20 11:36:27.254657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:16.053 [2024-11-20 11:36:27.254682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.053 [2024-11-20 11:36:27.254698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:16.053 [2024-11-20 11:36:27.254723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.053 [2024-11-20 11:36:27.254739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:16.053 [2024-11-20 11:36:27.254764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:20552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.053 [2024-11-20 11:36:27.254781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:16.053 [2024-11-20 11:36:27.254805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:20560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.053 [2024-11-20 11:36:27.254821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:16.053 [2024-11-20 11:36:27.254845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.053 [2024-11-20 11:36:27.254862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:16.053 [2024-11-20 11:36:27.254887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:20576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.053 [2024-11-20 11:36:27.254903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:16.053 [2024-11-20 11:36:27.254928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.053 [2024-11-20 11:36:27.254944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:16.053 [2024-11-20 11:36:27.254968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:20592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.053 [2024-11-20 11:36:27.254985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:16.053 [2024-11-20 11:36:27.255017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:20600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.053 [2024-11-20 11:36:27.255035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:16.053 [2024-11-20 11:36:27.255060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:20608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.053 [2024-11-20 11:36:27.255077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:16.053 [2024-11-20 11:36:27.255102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.053 [2024-11-20 11:36:27.255119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:16.053 [2024-11-20 11:36:27.255144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.053 [2024-11-20 11:36:27.255161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:16.053 [2024-11-20 11:36:27.255185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.053 [2024-11-20 11:36:27.255201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:16.053 [2024-11-20 11:36:27.256330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:20640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.053 [2024-11-20 11:36:27.256356] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:16.053 [2024-11-20 11:36:27.256385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:20648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.053 [2024-11-20 11:36:27.256402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:16.053 [2024-11-20 11:36:27.256428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:20656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.053 [2024-11-20 11:36:27.256445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:16.053 [2024-11-20 11:36:27.256472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:20664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.053 [2024-11-20 11:36:27.256488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:16.053 [2024-11-20 11:36:27.256515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:20672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.053 [2024-11-20 11:36:27.256532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:16.053 [2024-11-20 11:36:27.256558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:20680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.053 [2024-11-20 11:36:27.256575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:16.053 [2024-11-20 11:36:27.256602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:20688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.053 [2024-11-20 11:36:27.256633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:16.053 [2024-11-20 11:36:27.256889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:20696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.053 [2024-11-20 11:36:27.256929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:16.054 [2024-11-20 11:36:27.256965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:21096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.054 [2024-11-20 11:36:27.256984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:16.054 [2024-11-20 11:36:27.257016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:21104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.054 [2024-11-20 11:36:27.257033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:16.054 [2024-11-20 11:36:27.257065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:21112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:24:16.054 [2024-11-20 11:36:27.257082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:16.054 [2024-11-20 11:36:27.257112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:21120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.054 [2024-11-20 11:36:27.257129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:16.054 [2024-11-20 11:36:27.257159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:21128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.054 [2024-11-20 11:36:27.257176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:16.054 [2024-11-20 11:36:27.257206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:21136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.054 [2024-11-20 11:36:27.257223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:16.054 [2024-11-20 11:36:27.257253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:21144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.054 [2024-11-20 11:36:27.257270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:16.054 [2024-11-20 11:36:27.257300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:21152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.054 [2024-11-20 11:36:27.257317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:16.054 7946.61 IOPS, 31.04 MiB/s [2024-11-20T11:37:01.643Z] 7615.50 IOPS, 29.75 MiB/s [2024-11-20T11:37:01.643Z] 7310.88 IOPS, 28.56 MiB/s [2024-11-20T11:37:01.643Z] 7029.69 IOPS, 27.46 MiB/s [2024-11-20T11:37:01.643Z] 6769.33 IOPS, 26.44 MiB/s [2024-11-20T11:37:01.643Z] 6527.57 IOPS, 25.50 MiB/s [2024-11-20T11:37:01.643Z] 6302.48 IOPS, 24.62 MiB/s [2024-11-20T11:37:01.643Z] 6239.93 IOPS, 24.37 MiB/s [2024-11-20T11:37:01.643Z] 6308.77 IOPS, 24.64 MiB/s [2024-11-20T11:37:01.643Z] 6376.78 IOPS, 24.91 MiB/s [2024-11-20T11:37:01.643Z] 6407.03 IOPS, 25.03 MiB/s [2024-11-20T11:37:01.643Z] 6454.94 IOPS, 25.21 MiB/s [2024-11-20T11:37:01.643Z] 6485.49 IOPS, 25.33 MiB/s [2024-11-20T11:37:01.643Z] 6522.69 IOPS, 25.48 MiB/s [2024-11-20T11:37:01.643Z] [2024-11-20 11:36:40.977215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:56872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.054 [2024-11-20 11:36:40.977285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:16.054 [2024-11-20 11:36:40.977345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:56240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.054 [2024-11-20 11:36:40.977369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:16.054 [2024-11-20 11:36:40.977423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:6 nsid:1 lba:56248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.054 [2024-11-20 11:36:40.977442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:16.054 [2024-11-20 11:36:40.977465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:56256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.054 [2024-11-20 11:36:40.977482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:16.054 [2024-11-20 11:36:40.977504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:56264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.054 [2024-11-20 11:36:40.977520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:16.054 [2024-11-20 11:36:40.977543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:56272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.054 [2024-11-20 11:36:40.977558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:16.054 [2024-11-20 11:36:40.977581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:56280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.054 [2024-11-20 11:36:40.977597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:16.054 [2024-11-20 11:36:40.977641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:56288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.054 [2024-11-20 11:36:40.977669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:16.054 [2024-11-20 11:36:40.977698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:56296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.054 [2024-11-20 11:36:40.977719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:16.054 [2024-11-20 11:36:40.977748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:56304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.054 [2024-11-20 11:36:40.977782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:16.054 [2024-11-20 11:36:40.977815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:56312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.054 [2024-11-20 11:36:40.977837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:16.054 [2024-11-20 11:36:40.977866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:56320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.054 [2024-11-20 11:36:40.977887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:16.054 [2024-11-20 11:36:40.977917] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:56328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.054 [2024-11-20 11:36:40.977939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:16.054 [2024-11-20 11:36:40.977968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:56336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.054 [2024-11-20 11:36:40.977989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:16.054 [2024-11-20 11:36:40.978238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:56344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.054 [2024-11-20 11:36:40.978290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:16.054 [2024-11-20 11:36:40.978332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:56352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.054 [2024-11-20 11:36:40.978356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:16.054 [2024-11-20 11:36:40.978386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:56880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.054 [2024-11-20 11:36:40.978408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.054 [2024-11-20 11:36:40.978438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:56888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.054 [2024-11-20 11:36:40.978461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:16.054 [2024-11-20 11:36:40.978490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:56896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.055 [2024-11-20 11:36:40.978511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:16.055 [2024-11-20 11:36:40.978540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:56904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.055 [2024-11-20 11:36:40.978562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:16.055 [2024-11-20 11:36:40.978591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:56912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.055 [2024-11-20 11:36:40.978629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:16.055 [2024-11-20 11:36:40.978663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:56920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.055 [2024-11-20 11:36:40.978685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 
00:24:16.055 [2024-11-20 11:36:40.978725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:56928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.055 [2024-11-20 11:36:40.978748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:16.055 [2024-11-20 11:36:40.979126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:56936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.055 [2024-11-20 11:36:40.979161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.055 [2024-11-20 11:36:40.979189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:56944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.055 [2024-11-20 11:36:40.979211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.055 [2024-11-20 11:36:40.979244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:56952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.055 [2024-11-20 11:36:40.979264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.055 [2024-11-20 11:36:40.979286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:56960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.055 [2024-11-20 11:36:40.979306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.055 [2024-11-20 11:36:40.979343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:56968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.055 [2024-11-20 11:36:40.979364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.055 [2024-11-20 11:36:40.979386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:56976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.055 [2024-11-20 11:36:40.979407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.055 [2024-11-20 11:36:40.979430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:56984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.055 [2024-11-20 11:36:40.979450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.055 [2024-11-20 11:36:40.979473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:56992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.055 [2024-11-20 11:36:40.979493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.055 [2024-11-20 11:36:40.979515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:57000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.055 [2024-11-20 11:36:40.979535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.055 [2024-11-20 11:36:40.979558] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:57008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.055 [2024-11-20 11:36:40.979578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.055 [2024-11-20 11:36:40.979601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:57016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.055 [2024-11-20 11:36:40.979639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.055 [2024-11-20 11:36:40.979664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:57024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.055 [2024-11-20 11:36:40.979684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.055 [2024-11-20 11:36:40.979709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:57032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.055 [2024-11-20 11:36:40.979733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.055 [2024-11-20 11:36:40.979750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:57040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.055 [2024-11-20 11:36:40.979765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.055 [2024-11-20 11:36:40.979781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:57048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.055 [2024-11-20 11:36:40.979795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.055 [2024-11-20 11:36:40.979811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:57056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.055 [2024-11-20 11:36:40.979826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.055 [2024-11-20 11:36:40.979842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:57064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.055 [2024-11-20 11:36:40.979866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.055 [2024-11-20 11:36:40.979883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:57072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.055 [2024-11-20 11:36:40.979898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.055 [2024-11-20 11:36:40.979914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:57080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.055 [2024-11-20 11:36:40.979928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.055 [2024-11-20 11:36:40.979944] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:57088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.055 [2024-11-20 11:36:40.979959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.055 [2024-11-20 11:36:40.979975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:57096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.055 [2024-11-20 11:36:40.979989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.055 [2024-11-20 11:36:40.980005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:57104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.055 [2024-11-20 11:36:40.980019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.055 [2024-11-20 11:36:40.980038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:57112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.055 [2024-11-20 11:36:40.980061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.055 [2024-11-20 11:36:40.980083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:57120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.055 [2024-11-20 11:36:40.980103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.055 [2024-11-20 11:36:40.980125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:57128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.055 [2024-11-20 11:36:40.980145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.055 [2024-11-20 11:36:40.980167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:57136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.055 [2024-11-20 11:36:40.980187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.055 [2024-11-20 11:36:40.980210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:57144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.055 [2024-11-20 11:36:40.980230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.055 [2024-11-20 11:36:40.980252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:57152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.055 [2024-11-20 11:36:40.980272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.055 [2024-11-20 11:36:40.980294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:57160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.055 [2024-11-20 11:36:40.980313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.055 [2024-11-20 11:36:40.980345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:57168 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.055 [2024-11-20 11:36:40.980366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.055 [2024-11-20 11:36:40.980389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:57176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.056 [2024-11-20 11:36:40.980410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.056 [2024-11-20 11:36:40.980431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:57184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.056 [2024-11-20 11:36:40.980451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.056 [2024-11-20 11:36:40.980473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:57192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.056 [2024-11-20 11:36:40.980493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.056 [2024-11-20 11:36:40.980515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:56360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.056 [2024-11-20 11:36:40.980535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.056 [2024-11-20 11:36:40.980558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:56368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.056 [2024-11-20 11:36:40.980577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.056 [2024-11-20 11:36:40.980599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:56376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.056 [2024-11-20 11:36:40.980636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.056 [2024-11-20 11:36:40.980660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:56384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.056 [2024-11-20 11:36:40.980680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.056 [2024-11-20 11:36:40.980702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:56392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.056 [2024-11-20 11:36:40.980722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.056 [2024-11-20 11:36:40.980744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:56400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.056 [2024-11-20 11:36:40.980765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.056 [2024-11-20 11:36:40.980787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:56408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:16.056 [2024-11-20 11:36:40.980807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.056 [2024-11-20 11:36:40.980828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:56416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.056 [2024-11-20 11:36:40.980848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.056 [2024-11-20 11:36:40.980870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:56424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.056 [2024-11-20 11:36:40.980899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.056 [2024-11-20 11:36:40.980923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:56432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.056 [2024-11-20 11:36:40.980944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.056 [2024-11-20 11:36:40.980966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:56440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.056 [2024-11-20 11:36:40.980986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.056 [2024-11-20 11:36:40.981008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:56448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.056 [2024-11-20 11:36:40.981028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.056 [2024-11-20 11:36:40.981050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:56456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.056 [2024-11-20 11:36:40.981069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.056 [2024-11-20 11:36:40.981091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:56464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.056 [2024-11-20 11:36:40.981113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.056 [2024-11-20 11:36:40.981135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:56472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.056 [2024-11-20 11:36:40.981155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.056 [2024-11-20 11:36:40.981178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:56480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.056 [2024-11-20 11:36:40.981205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.056 [2024-11-20 11:36:40.981229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:56488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.056 [2024-11-20 11:36:40.981250] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.056 [2024-11-20 11:36:40.981272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:56496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.056 [2024-11-20 11:36:40.981292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.056 [2024-11-20 11:36:40.981315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:56504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.056 [2024-11-20 11:36:40.981335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.056 [2024-11-20 11:36:40.981358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:56512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.056 [2024-11-20 11:36:40.981379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.056 [2024-11-20 11:36:40.981402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:56520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.056 [2024-11-20 11:36:40.981422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.056 [2024-11-20 11:36:40.981445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:56528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.056 [2024-11-20 11:36:40.981474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.056 [2024-11-20 11:36:40.981499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:56536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.056 [2024-11-20 11:36:40.981525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.056 [2024-11-20 11:36:40.981553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:56544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.056 [2024-11-20 11:36:40.981578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.056 [2024-11-20 11:36:40.981607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:56552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.056 [2024-11-20 11:36:40.981651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.056 [2024-11-20 11:36:40.981682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:56560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.056 [2024-11-20 11:36:40.981709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.056 [2024-11-20 11:36:40.981737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:56568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.056 [2024-11-20 11:36:40.981764] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.056 [2024-11-20 11:36:40.981809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:56576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.056 [2024-11-20 11:36:40.981836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.056 [2024-11-20 11:36:40.981866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:56584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.056 [2024-11-20 11:36:40.981892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.056 [2024-11-20 11:36:40.981914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:56592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.056 [2024-11-20 11:36:40.981929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.056 [2024-11-20 11:36:40.981946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:56600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.056 [2024-11-20 11:36:40.981960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.056 [2024-11-20 11:36:40.981976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:56608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.057 [2024-11-20 11:36:40.981994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.057 [2024-11-20 11:36:40.982010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:56616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.057 [2024-11-20 11:36:40.982025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.057 [2024-11-20 11:36:40.982041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:56624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.057 [2024-11-20 11:36:40.982055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.057 [2024-11-20 11:36:40.982081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:56632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.057 [2024-11-20 11:36:40.982096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.057 [2024-11-20 11:36:40.982112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:56640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.057 [2024-11-20 11:36:40.982128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.057 [2024-11-20 11:36:40.982144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:56648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.057 [2024-11-20 11:36:40.982158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.057 [2024-11-20 11:36:40.982174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:56656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.057 [2024-11-20 11:36:40.982189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.057 [2024-11-20 11:36:40.982205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:56664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.057 [2024-11-20 11:36:40.982219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.057 [2024-11-20 11:36:40.982235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:56672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.057 [2024-11-20 11:36:40.982250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.057 [2024-11-20 11:36:40.982266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:56680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.057 [2024-11-20 11:36:40.982280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.057 [2024-11-20 11:36:40.982296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:56688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.057 [2024-11-20 11:36:40.982310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.057 [2024-11-20 11:36:40.982326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:56696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.057 [2024-11-20 11:36:40.982340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.057 [2024-11-20 11:36:40.982356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:56704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.057 [2024-11-20 11:36:40.982371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.057 [2024-11-20 11:36:40.982386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:56712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.057 [2024-11-20 11:36:40.982401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.057 [2024-11-20 11:36:40.982417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:56720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.057 [2024-11-20 11:36:40.982431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.057 [2024-11-20 11:36:40.982447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:56728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.057 [2024-11-20 11:36:40.982468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.057 [2024-11-20 11:36:40.982484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:56736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.057 [2024-11-20 11:36:40.982503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.057 [2024-11-20 11:36:40.982520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:57200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.057 [2024-11-20 11:36:40.982534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.057 [2024-11-20 11:36:40.982550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:57208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.057 [2024-11-20 11:36:40.982564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.057 [2024-11-20 11:36:40.982580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:57216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.057 [2024-11-20 11:36:40.982595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.057 [2024-11-20 11:36:40.982611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:57224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.057 [2024-11-20 11:36:40.982645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.057 [2024-11-20 11:36:40.982663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:57232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.057 [2024-11-20 11:36:40.982678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.057 [2024-11-20 11:36:40.982694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:57240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.057 [2024-11-20 11:36:40.982709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.057 [2024-11-20 11:36:40.982725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:57248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.057 [2024-11-20 11:36:40.982739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.057 [2024-11-20 11:36:40.982755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:57256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.057 [2024-11-20 11:36:40.982770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.057 [2024-11-20 11:36:40.982786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:56744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.057 [2024-11-20 11:36:40.982800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.057 
[2024-11-20 11:36:40.982816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:56752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.057 [2024-11-20 11:36:40.982831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.057 [2024-11-20 11:36:40.982847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:56760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.057 [2024-11-20 11:36:40.982862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.057 [2024-11-20 11:36:40.982885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:56768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.057 [2024-11-20 11:36:40.982900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.057 [2024-11-20 11:36:40.982917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:56776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.057 [2024-11-20 11:36:40.982931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.057 [2024-11-20 11:36:40.982947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:56784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.057 [2024-11-20 11:36:40.982962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.057 [2024-11-20 11:36:40.982977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:56792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.057 [2024-11-20 11:36:40.982992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.057 [2024-11-20 11:36:40.983008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:56800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.057 [2024-11-20 11:36:40.983024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.057 [2024-11-20 11:36:40.983041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:56808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.057 [2024-11-20 11:36:40.983055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.057 [2024-11-20 11:36:40.983071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:56816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.057 [2024-11-20 11:36:40.983085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.057 [2024-11-20 11:36:40.983102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:56824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.058 [2024-11-20 11:36:40.983116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.058 [2024-11-20 11:36:40.983133] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:56832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.058 [2024-11-20 11:36:40.983157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.058 [2024-11-20 11:36:40.983181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:56840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.058 [2024-11-20 11:36:40.983202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.058 [2024-11-20 11:36:40.983224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:56848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.058 [2024-11-20 11:36:40.983244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.058 [2024-11-20 11:36:40.983267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:56856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.058 [2024-11-20 11:36:40.983287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.058 [2024-11-20 11:36:40.983310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:56864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.058 [2024-11-20 11:36:40.983338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.058 [2024-11-20 11:36:40.983784] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:16.058 [2024-11-20 11:36:40.983830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.058 [2024-11-20 11:36:40.983860] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:16.058 [2024-11-20 11:36:40.983885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.058 [2024-11-20 11:36:40.983913] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:16.058 [2024-11-20 11:36:40.983932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.058 [2024-11-20 11:36:40.983948] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:16.058 [2024-11-20 11:36:40.983962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.058 [2024-11-20 11:36:40.983977] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:0014000c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.058 [2024-11-20 11:36:40.983992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.058 [2024-11-20 11:36:40.984014] nvme_tcp.c: 
326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe12190 is same with the state(6) to be set 00:24:16.058 [2024-11-20 11:36:40.986014] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:24:16.058 [2024-11-20 11:36:40.986062] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe12190 (9): Bad file descriptor 00:24:16.058 [2024-11-20 11:36:40.986993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:16.058 [2024-11-20 11:36:40.987030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe12190 with addr=10.0.0.3, port=4421 00:24:16.058 [2024-11-20 11:36:40.987049] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe12190 is same with the state(6) to be set 00:24:16.058 [2024-11-20 11:36:40.987109] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe12190 (9): Bad file descriptor 00:24:16.058 [2024-11-20 11:36:40.987140] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:24:16.058 [2024-11-20 11:36:40.987156] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:24:16.058 [2024-11-20 11:36:40.987171] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:24:16.058 [2024-11-20 11:36:40.987185] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:24:16.058 [2024-11-20 11:36:40.987201] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:24:16.058 6554.49 IOPS, 25.60 MiB/s [2024-11-20T11:37:01.647Z] 6583.71 IOPS, 25.72 MiB/s [2024-11-20T11:37:01.647Z] 6608.23 IOPS, 25.81 MiB/s [2024-11-20T11:37:01.647Z] 6649.90 IOPS, 25.98 MiB/s [2024-11-20T11:37:01.647Z] 6686.39 IOPS, 26.12 MiB/s [2024-11-20T11:37:01.647Z] 6721.43 IOPS, 26.26 MiB/s [2024-11-20T11:37:01.647Z] 6753.58 IOPS, 26.38 MiB/s [2024-11-20T11:37:01.647Z] 6786.11 IOPS, 26.51 MiB/s [2024-11-20T11:37:01.647Z] 6820.31 IOPS, 26.64 MiB/s [2024-11-20T11:37:01.647Z] 6854.41 IOPS, 26.78 MiB/s [2024-11-20T11:37:01.647Z] [2024-11-20 11:36:51.105796] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful. 
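The failover visible just above — connect() to 10.0.0.3 port 4421 refused (errno 111), the controller reset reported as failed, another reset attempt, and roughly ten seconds later "Resetting controller successful" — is easier to follow when the reset events and the surrounding IOPS samples are pulled out of the console text. A minimal shell sketch, assuming this console output has been saved to a file named build.log (the file name and the post-processing itself are illustrative additions, not part of the multipath test scripts):

  # Extract the multipath failover timeline: every disconnect/reset event
  # printed by nvme_ctrlr.c / bdev_nvme.c around the path outage.
  grep -E 'resetting controller|Resetting controller (failed|successful)' build.log

  # Average the per-interval IOPS samples and convert to MiB/s for the
  # 4096-byte verify workload (MiB/s = IOPS * 4096 / 1048576).
  grep -oE '[0-9]+\.[0-9]+ IOPS' build.log |
    awk '{sum += $1; n++} END { if (n) printf "avg %.2f IOPS (%.2f MiB/s)\n", sum/n, sum/n * 4096 / 1048576 }'

The same 4096-byte conversion explains the paired numbers in the samples below, e.g. 7116.95 IOPS corresponds to 7116.95 * 4096 / 1048576 ≈ 27.80 MiB/s.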
00:24:16.058 6887.74 IOPS, 26.91 MiB/s [2024-11-20T11:37:01.647Z] 6921.10 IOPS, 27.04 MiB/s [2024-11-20T11:37:01.647Z] 6951.57 IOPS, 27.15 MiB/s [2024-11-20T11:37:01.647Z] 6980.88 IOPS, 27.27 MiB/s [2024-11-20T11:37:01.647Z] 7005.25 IOPS, 27.36 MiB/s [2024-11-20T11:37:01.647Z] 7029.40 IOPS, 27.46 MiB/s [2024-11-20T11:37:01.647Z] 7053.72 IOPS, 27.55 MiB/s [2024-11-20T11:37:01.647Z] 7074.59 IOPS, 27.64 MiB/s [2024-11-20T11:37:01.647Z] 7096.18 IOPS, 27.72 MiB/s [2024-11-20T11:37:01.647Z] 7113.79 IOPS, 27.79 MiB/s [2024-11-20T11:37:01.647Z] Received shutdown signal, test time was about 56.549250 seconds 00:24:16.058 00:24:16.058 Latency(us) 00:24:16.058 [2024-11-20T11:37:01.647Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:16.058 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:24:16.058 Verification LBA range: start 0x0 length 0x4000 00:24:16.058 Nvme0n1 : 56.55 7116.95 27.80 0.00 0.00 17953.75 826.65 7046430.72 00:24:16.058 [2024-11-20T11:37:01.647Z] =================================================================================================================== 00:24:16.058 [2024-11-20T11:37:01.647Z] Total : 7116.95 27.80 0.00 0.00 17953.75 826.65 7046430.72 00:24:16.058 11:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:16.317 11:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@122 -- # trap - SIGINT SIGTERM EXIT 00:24:16.317 11:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@124 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:24:16.317 11:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@125 -- # nvmftestfini 00:24:16.317 11:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:16.317 11:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@121 -- # sync 00:24:16.317 11:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:16.317 11:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@124 -- # set +e 00:24:16.317 11:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:16.317 11:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:16.317 rmmod nvme_tcp 00:24:16.576 rmmod nvme_fabrics 00:24:16.576 rmmod nvme_keyring 00:24:16.576 11:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:16.576 11:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@128 -- # set -e 00:24:16.576 11:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@129 -- # return 0 00:24:16.576 11:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@517 -- # '[' -n 95379 ']' 00:24:16.576 11:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@518 -- # killprocess 95379 00:24:16.576 11:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@954 -- # '[' -z 95379 ']' 00:24:16.576 11:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@958 -- # kill -0 95379 00:24:16.576 11:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@959 -- # uname 00:24:16.576 11:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:16.576 11:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 95379 00:24:16.576 11:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:16.576 killing process with pid 95379 00:24:16.576 11:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:16.576 11:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@972 -- # echo 'killing process with pid 95379' 00:24:16.576 11:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@973 -- # kill 95379 00:24:16.576 11:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@978 -- # wait 95379 00:24:16.576 11:37:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:16.577 11:37:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:16.577 11:37:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:16.577 11:37:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@297 -- # iptr 00:24:16.577 11:37:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@791 -- # iptables-save 00:24:16.577 11:37:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:16.577 11:37:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:24:16.577 11:37:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:16.577 11:37:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:24:16.577 11:37:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:24:16.577 11:37:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:24:16.835 11:37:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:24:16.835 11:37:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:24:16.835 11:37:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:24:16.835 11:37:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:24:16.835 11:37:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:24:16.835 11:37:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:24:16.835 11:37:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:24:16.835 11:37:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:24:16.835 11:37:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:24:16.835 11:37:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:16.835 11:37:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:16.835 11:37:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@246 -- # remove_spdk_ns 00:24:16.835 11:37:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:16.835 11:37:02 
nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:16.835 11:37:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:16.835 11:37:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@300 -- # return 0 00:24:16.835 00:24:16.835 real 1m1.813s 00:24:16.835 user 2m56.691s 00:24:16.835 sys 0m13.241s 00:24:16.835 11:37:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:16.835 11:37:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:24:16.835 ************************************ 00:24:16.835 END TEST nvmf_host_multipath 00:24:16.835 ************************************ 00:24:16.835 11:37:02 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@43 -- # run_test nvmf_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:24:16.835 11:37:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:16.835 11:37:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:16.835 11:37:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:17.095 ************************************ 00:24:17.095 START TEST nvmf_timeout 00:24:17.095 ************************************ 00:24:17.095 11:37:02 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:24:17.095 * Looking for test storage... 00:24:17.095 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:24:17.095 11:37:02 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:17.095 11:37:02 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1693 -- # lcov --version 00:24:17.095 11:37:02 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:17.095 11:37:02 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:17.095 11:37:02 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:17.095 11:37:02 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:17.095 11:37:02 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:17.095 11:37:02 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@336 -- # IFS=.-: 00:24:17.095 11:37:02 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@336 -- # read -ra ver1 00:24:17.095 11:37:02 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@337 -- # IFS=.-: 00:24:17.095 11:37:02 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@337 -- # read -ra ver2 00:24:17.095 11:37:02 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@338 -- # local 'op=<' 00:24:17.095 11:37:02 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@340 -- # ver1_l=2 00:24:17.095 11:37:02 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@341 -- # ver2_l=1 00:24:17.095 11:37:02 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:17.095 11:37:02 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@344 -- # case "$op" in 00:24:17.095 11:37:02 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@345 -- # : 1 00:24:17.095 11:37:02 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:17.095 11:37:02 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:17.095 11:37:02 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@365 -- # decimal 1 00:24:17.095 11:37:02 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@353 -- # local d=1 00:24:17.095 11:37:02 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:17.095 11:37:02 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@355 -- # echo 1 00:24:17.095 11:37:02 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@365 -- # ver1[v]=1 00:24:17.095 11:37:02 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@366 -- # decimal 2 00:24:17.095 11:37:02 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@353 -- # local d=2 00:24:17.095 11:37:02 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:17.095 11:37:02 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@355 -- # echo 2 00:24:17.095 11:37:02 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@366 -- # ver2[v]=2 00:24:17.095 11:37:02 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:17.095 11:37:02 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:17.095 11:37:02 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@368 -- # return 0 00:24:17.095 11:37:02 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:17.095 11:37:02 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:17.095 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:17.095 --rc genhtml_branch_coverage=1 00:24:17.095 --rc genhtml_function_coverage=1 00:24:17.095 --rc genhtml_legend=1 00:24:17.095 --rc geninfo_all_blocks=1 00:24:17.095 --rc geninfo_unexecuted_blocks=1 00:24:17.095 00:24:17.095 ' 00:24:17.095 11:37:02 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:17.095 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:17.095 --rc genhtml_branch_coverage=1 00:24:17.096 --rc genhtml_function_coverage=1 00:24:17.096 --rc genhtml_legend=1 00:24:17.096 --rc geninfo_all_blocks=1 00:24:17.096 --rc geninfo_unexecuted_blocks=1 00:24:17.096 00:24:17.096 ' 00:24:17.096 11:37:02 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:17.096 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:17.096 --rc genhtml_branch_coverage=1 00:24:17.096 --rc genhtml_function_coverage=1 00:24:17.096 --rc genhtml_legend=1 00:24:17.096 --rc geninfo_all_blocks=1 00:24:17.096 --rc geninfo_unexecuted_blocks=1 00:24:17.096 00:24:17.096 ' 00:24:17.096 11:37:02 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:17.096 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:17.096 --rc genhtml_branch_coverage=1 00:24:17.096 --rc genhtml_function_coverage=1 00:24:17.096 --rc genhtml_legend=1 00:24:17.096 --rc geninfo_all_blocks=1 00:24:17.096 --rc geninfo_unexecuted_blocks=1 00:24:17.096 00:24:17.096 ' 00:24:17.096 11:37:02 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:24:17.096 11:37:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@7 -- # uname -s 00:24:17.096 11:37:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:17.096 11:37:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:17.096 
11:37:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:17.096 11:37:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:17.096 11:37:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:17.096 11:37:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:17.096 11:37:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:17.096 11:37:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:17.096 11:37:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:17.096 11:37:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:17.096 11:37:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a 00:24:17.096 11:37:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=806d442c-08ba-4719-b76d-33169283459a 00:24:17.096 11:37:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:17.096 11:37:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:17.096 11:37:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:24:17.096 11:37:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:17.096 11:37:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:17.096 11:37:02 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@15 -- # shopt -s extglob 00:24:17.096 11:37:02 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:17.096 11:37:02 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:17.096 11:37:02 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:17.096 11:37:02 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:17.096 11:37:02 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:17.096 11:37:02 nvmf_tcp.nvmf_host.nvmf_timeout -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:17.096 11:37:02 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@5 -- # export PATH 00:24:17.096 11:37:02 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:17.096 11:37:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@51 -- # : 0 00:24:17.096 11:37:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:17.096 11:37:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:17.096 11:37:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:17.096 11:37:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:17.096 11:37:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:17.096 11:37:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:17.096 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:17.096 11:37:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:17.096 11:37:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:17.096 11:37:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:17.096 11:37:02 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:17.096 11:37:02 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:17.096 11:37:02 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:17.096 11:37:02 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:24:17.096 11:37:02 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:17.096 11:37:02 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@19 -- # nvmftestinit 00:24:17.096 11:37:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:17.096 11:37:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:17.096 11:37:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:17.096 11:37:02 
nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:17.096 11:37:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:17.096 11:37:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:17.096 11:37:02 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:17.096 11:37:02 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:17.096 11:37:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:24:17.096 11:37:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:24:17.097 11:37:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:24:17.097 11:37:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:24:17.097 11:37:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:24:17.097 11:37:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@460 -- # nvmf_veth_init 00:24:17.097 11:37:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:17.097 11:37:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:24:17.097 11:37:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:24:17.097 11:37:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:24:17.097 11:37:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:17.097 11:37:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:24:17.097 11:37:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:24:17.097 11:37:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:24:17.097 11:37:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:24:17.097 11:37:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:24:17.097 11:37:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:24:17.097 11:37:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:17.097 11:37:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:24:17.097 11:37:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:24:17.097 11:37:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:24:17.097 11:37:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:24:17.097 11:37:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:24:17.097 Cannot find device "nvmf_init_br" 00:24:17.097 11:37:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@162 -- # true 00:24:17.097 11:37:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:24:17.097 Cannot find device "nvmf_init_br2" 00:24:17.097 11:37:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@163 -- # true 00:24:17.097 11:37:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@164 
-- # ip link set nvmf_tgt_br nomaster 00:24:17.097 Cannot find device "nvmf_tgt_br" 00:24:17.097 11:37:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@164 -- # true 00:24:17.097 11:37:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:24:17.356 Cannot find device "nvmf_tgt_br2" 00:24:17.356 11:37:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@165 -- # true 00:24:17.356 11:37:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:24:17.356 Cannot find device "nvmf_init_br" 00:24:17.356 11:37:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@166 -- # true 00:24:17.356 11:37:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:24:17.356 Cannot find device "nvmf_init_br2" 00:24:17.356 11:37:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@167 -- # true 00:24:17.356 11:37:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:24:17.356 Cannot find device "nvmf_tgt_br" 00:24:17.356 11:37:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@168 -- # true 00:24:17.356 11:37:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:24:17.356 Cannot find device "nvmf_tgt_br2" 00:24:17.356 11:37:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@169 -- # true 00:24:17.356 11:37:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:24:17.356 Cannot find device "nvmf_br" 00:24:17.356 11:37:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@170 -- # true 00:24:17.356 11:37:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:24:17.356 Cannot find device "nvmf_init_if" 00:24:17.356 11:37:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@171 -- # true 00:24:17.356 11:37:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:24:17.356 Cannot find device "nvmf_init_if2" 00:24:17.356 11:37:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@172 -- # true 00:24:17.356 11:37:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:17.356 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:17.356 11:37:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@173 -- # true 00:24:17.356 11:37:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:17.356 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:17.356 11:37:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@174 -- # true 00:24:17.356 11:37:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:24:17.356 11:37:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:24:17.356 11:37:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:24:17.356 11:37:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:24:17.356 11:37:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:24:17.356 11:37:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 
00:24:17.356 11:37:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:24:17.356 11:37:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:24:17.356 11:37:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:24:17.356 11:37:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:24:17.356 11:37:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:24:17.356 11:37:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:24:17.356 11:37:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:24:17.356 11:37:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:24:17.356 11:37:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:24:17.356 11:37:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:24:17.356 11:37:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:24:17.356 11:37:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:24:17.356 11:37:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:24:17.356 11:37:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:24:17.356 11:37:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:24:17.356 11:37:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:24:17.356 11:37:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:24:17.356 11:37:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:24:17.356 11:37:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:24:17.615 11:37:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:24:17.615 11:37:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:24:17.615 11:37:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:24:17.615 11:37:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:24:17.615 11:37:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:24:17.615 11:37:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:24:17.615 11:37:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 
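For reference, the nvmf_veth_init sequence traced above reduces to the following standalone sketch (one initiator/target veth pair instead of the harness's two; the interface names, 10.0.0.0/24 addressing and port 4420 are taken from the log, everything else is illustrative and assumes root):

    # namespace that will hold the target-side interfaces
    ip netns add nvmf_tgt_ns_spdk
    # one veth pair for the initiator, one for the target
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    # address the endpoints (initiator 10.0.0.1, target 10.0.0.3 as in the log)
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    # bring the endpoints up
    ip link set nvmf_init_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    # bridge the host-side peers together
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    # allow NVMe/TCP traffic to the target port and forwarding across the bridge
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    # quick reachability check, mirroring the pings that follow in the log
    ping -c 1 10.0.0.3

The namespace split gives the target its own network stack, so the initiator really crosses a (virtual) network to reach 10.0.0.3 rather than a local socket.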
00:24:17.615 11:37:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:24:17.615 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:24:17.615 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.094 ms 00:24:17.615 00:24:17.615 --- 10.0.0.3 ping statistics --- 00:24:17.615 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:17.615 rtt min/avg/max/mdev = 0.094/0.094/0.094/0.000 ms 00:24:17.615 11:37:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:24:17.616 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:24:17.616 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.046 ms 00:24:17.616 00:24:17.616 --- 10.0.0.4 ping statistics --- 00:24:17.616 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:17.616 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:24:17.616 11:37:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:24:17.616 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:17.616 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:24:17.616 00:24:17.616 --- 10.0.0.1 ping statistics --- 00:24:17.616 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:17.616 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:24:17.616 11:37:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:24:17.616 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:17.616 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.077 ms 00:24:17.616 00:24:17.616 --- 10.0.0.2 ping statistics --- 00:24:17.616 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:17.616 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:24:17.616 11:37:03 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:17.616 11:37:03 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@461 -- # return 0 00:24:17.616 11:37:03 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:17.616 11:37:03 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:17.616 11:37:03 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:17.616 11:37:03 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:17.616 11:37:03 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:17.616 11:37:03 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:17.616 11:37:03 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:17.616 11:37:03 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@21 -- # nvmfappstart -m 0x3 00:24:17.616 11:37:03 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:17.616 11:37:03 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:17.616 11:37:03 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:17.616 11:37:03 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@509 -- # nvmfpid=96781 00:24:17.616 11:37:03 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:24:17.616 11:37:03 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@510 -- # waitforlisten 96781 00:24:17.616 11:37:03 
nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # '[' -z 96781 ']' 00:24:17.616 11:37:03 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:17.616 11:37:03 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:17.616 11:37:03 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:17.616 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:17.616 11:37:03 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:17.616 11:37:03 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:17.616 [2024-11-20 11:37:03.094571] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 00:24:17.616 [2024-11-20 11:37:03.094697] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:17.874 [2024-11-20 11:37:03.248145] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:24:17.874 [2024-11-20 11:37:03.287759] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:17.874 [2024-11-20 11:37:03.288017] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:17.874 [2024-11-20 11:37:03.288250] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:17.874 [2024-11-20 11:37:03.288519] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:17.874 [2024-11-20 11:37:03.288705] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
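nvmfappstart -m 0x3 in the trace boils down to launching nvmf_tgt inside that namespace and waiting for its RPC socket. A minimal sketch, assuming the default /var/tmp/spdk.sock socket seen in the log (the polling loop is illustrative, not the harness's waitforlisten implementation):

    # run nvmf_tgt in the target namespace: shm id 0, trace mask 0xFFFF, two cores (0x3)
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &
    nvmfpid=$!
    # wait until the app answers RPCs before configuring it
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
            spdk_get_version >/dev/null 2>&1; do
        sleep 0.5
    done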
00:24:17.874 [2024-11-20 11:37:03.289840] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:17.874 [2024-11-20 11:37:03.289853] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:17.874 11:37:03 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:17.874 11:37:03 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@868 -- # return 0 00:24:17.874 11:37:03 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:17.874 11:37:03 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:17.874 11:37:03 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:17.874 11:37:03 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:17.874 11:37:03 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@23 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid || :; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:17.874 11:37:03 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:24:18.133 [2024-11-20 11:37:03.671976] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:18.133 11:37:03 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:24:18.700 Malloc0 00:24:18.700 11:37:04 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:18.958 11:37:04 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:19.217 11:37:04 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:24:19.475 [2024-11-20 11:37:05.023022] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:24:19.475 11:37:05 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@32 -- # bdevperf_pid=96858 00:24:19.475 11:37:05 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@34 -- # waitforlisten 96858 /var/tmp/bdevperf.sock 00:24:19.475 11:37:05 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # '[' -z 96858 ']' 00:24:19.475 11:37:05 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:24:19.475 11:37:05 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:19.475 11:37:05 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:19.475 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:19.475 11:37:05 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
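Collected in one place, the target-side configuration that timeout.sh issued over /var/tmp/spdk.sock is the following RPC sequence (commands copied from the trace; only the comments are added):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # create the TCP transport with the harness's default options
    $rpc nvmf_create_transport -t tcp -o -u 8192
    # 64 MiB RAM-backed bdev with 512-byte blocks to serve as the namespace
    $rpc bdev_malloc_create 64 512 -b Malloc0
    # subsystem allowing any host (-a) with a fixed serial number
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    # expose the subsystem on the veth address the initiator can reach
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420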
00:24:19.475 11:37:05 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:19.475 11:37:05 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:19.733 [2024-11-20 11:37:05.105972] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 00:24:19.733 [2024-11-20 11:37:05.106080] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96858 ] 00:24:19.733 [2024-11-20 11:37:05.259856] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:19.991 [2024-11-20 11:37:05.321755] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:19.991 11:37:05 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:19.991 11:37:05 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@868 -- # return 0 00:24:19.991 11:37:05 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:24:20.249 11:37:05 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:24:20.507 NVMe0n1 00:24:20.508 11:37:06 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@51 -- # rpc_pid=96893 00:24:20.508 11:37:06 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@50 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:20.508 11:37:06 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@53 -- # sleep 1 00:24:20.766 Running I/O for 10 seconds... 
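On the host side the test drives I/O through bdevperf instead of the kernel initiator, so the timeout behaviour under test is configured entirely over /var/tmp/bdevperf.sock. A sketch of that sequence as it appears in the trace, with the two reconnect knobs called out (paths, flags and values are from the log; backgrounding and ordering are simplified):

    spdk=/home/vagrant/spdk_repo/spdk
    # start bdevperf idle (-z) on its own RPC socket: QD 128, 4 KiB verify I/O for 10 s
    $spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock \
        -q 128 -o 4096 -w verify -t 10 -f &
    # bdev_nvme retry count as used by the harness (-r -1)
    $spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
    # attach the remote namespace as NVMe0n1; controller-loss timeout 5 s, reconnect delay 2 s
    $spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
        -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
        --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2
    # kick off the configured workload against the attached bdev
    $spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &

The nvmf_subsystem_remove_listener call that follows below is what produces the wall of ABORTED - SQ DELETION completions: in-flight verify I/O is failed back as the target-side queues are torn down.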
00:24:21.702 11:37:07 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:24:21.964 8415.00 IOPS, 32.87 MiB/s [2024-11-20T11:37:07.553Z] [2024-11-20 11:37:07.387379] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be96b0 is same with the state(6) to be set 00:24:21.964 [2024-11-20 11:37:07.387445] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be96b0 is same with the state(6) to be set 00:24:21.964 [2024-11-20 11:37:07.387457] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be96b0 is same with the state(6) to be set 00:24:21.964 [2024-11-20 11:37:07.387466] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be96b0 is same with the state(6) to be set 00:24:21.964 [2024-11-20 11:37:07.387474] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be96b0 is same with the state(6) to be set 00:24:21.964 [2024-11-20 11:37:07.387483] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be96b0 is same with the state(6) to be set 00:24:21.964 [2024-11-20 11:37:07.387491] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be96b0 is same with the state(6) to be set 00:24:21.964 [2024-11-20 11:37:07.387500] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be96b0 is same with the state(6) to be set 00:24:21.964 [2024-11-20 11:37:07.387508] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be96b0 is same with the state(6) to be set 00:24:21.964 [2024-11-20 11:37:07.387516] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be96b0 is same with the state(6) to be set 00:24:21.964 [2024-11-20 11:37:07.387524] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be96b0 is same with the state(6) to be set 00:24:21.964 [2024-11-20 11:37:07.387533] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be96b0 is same with the state(6) to be set 00:24:21.964 [2024-11-20 11:37:07.387541] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be96b0 is same with the state(6) to be set 00:24:21.964 [2024-11-20 11:37:07.387549] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be96b0 is same with the state(6) to be set 00:24:21.964 [2024-11-20 11:37:07.387557] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be96b0 is same with the state(6) to be set 00:24:21.964 [2024-11-20 11:37:07.387566] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be96b0 is same with the state(6) to be set 00:24:21.964 [2024-11-20 11:37:07.387574] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be96b0 is same with the state(6) to be set 00:24:21.964 [2024-11-20 11:37:07.387582] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be96b0 is same with the state(6) to be set 00:24:21.964 [2024-11-20 11:37:07.387590] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be96b0 is same with the state(6) to be set 00:24:21.964 [2024-11-20 11:37:07.387598] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be96b0 is same with the state(6) to be set 
00:24:21.964 [2024-11-20 11:37:07.387606] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be96b0 is same with the state(6) to be set 00:24:21.964 [2024-11-20 11:37:07.387629] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be96b0 is same with the state(6) to be set 00:24:21.964 [2024-11-20 11:37:07.387639] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be96b0 is same with the state(6) to be set 00:24:21.964 [2024-11-20 11:37:07.387648] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be96b0 is same with the state(6) to be set 00:24:21.964 [2024-11-20 11:37:07.387656] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be96b0 is same with the state(6) to be set 00:24:21.964 [2024-11-20 11:37:07.387664] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be96b0 is same with the state(6) to be set 00:24:21.964 [2024-11-20 11:37:07.387672] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be96b0 is same with the state(6) to be set 00:24:21.964 [2024-11-20 11:37:07.387681] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be96b0 is same with the state(6) to be set 00:24:21.964 [2024-11-20 11:37:07.387689] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be96b0 is same with the state(6) to be set 00:24:21.964 [2024-11-20 11:37:07.387697] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be96b0 is same with the state(6) to be set 00:24:21.964 [2024-11-20 11:37:07.387706] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be96b0 is same with the state(6) to be set 00:24:21.964 [2024-11-20 11:37:07.387714] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be96b0 is same with the state(6) to be set 00:24:21.964 [2024-11-20 11:37:07.387722] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be96b0 is same with the state(6) to be set 00:24:21.964 [2024-11-20 11:37:07.387731] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be96b0 is same with the state(6) to be set 00:24:21.964 [2024-11-20 11:37:07.387740] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be96b0 is same with the state(6) to be set 00:24:21.964 [2024-11-20 11:37:07.387748] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be96b0 is same with the state(6) to be set 00:24:21.964 [2024-11-20 11:37:07.387756] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be96b0 is same with the state(6) to be set 00:24:21.964 [2024-11-20 11:37:07.387764] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be96b0 is same with the state(6) to be set 00:24:21.964 [2024-11-20 11:37:07.387773] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be96b0 is same with the state(6) to be set 00:24:21.964 [2024-11-20 11:37:07.387781] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be96b0 is same with the state(6) to be set 00:24:21.964 [2024-11-20 11:37:07.389776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:80840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.964 [2024-11-20 11:37:07.389836] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.964 [2024-11-20 11:37:07.389869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:81024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.964 [2024-11-20 11:37:07.389881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.964 [2024-11-20 11:37:07.389895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:81032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.964 [2024-11-20 11:37:07.389905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.964 [2024-11-20 11:37:07.389917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:81040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.964 [2024-11-20 11:37:07.389927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.964 [2024-11-20 11:37:07.389938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:81048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.964 [2024-11-20 11:37:07.389948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.964 [2024-11-20 11:37:07.390206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:81056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.964 [2024-11-20 11:37:07.390233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.965 [2024-11-20 11:37:07.390247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:81064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.965 [2024-11-20 11:37:07.390387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.965 [2024-11-20 11:37:07.390608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:81072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.965 [2024-11-20 11:37:07.390640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.965 [2024-11-20 11:37:07.390654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:81080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.965 [2024-11-20 11:37:07.390664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.965 [2024-11-20 11:37:07.390676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:81088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.965 [2024-11-20 11:37:07.390686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.965 [2024-11-20 11:37:07.390698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:81096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.965 [2024-11-20 11:37:07.390707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.965 [2024-11-20 11:37:07.390718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:81104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.965 [2024-11-20 11:37:07.390728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.965 [2024-11-20 11:37:07.390855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:81112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.965 [2024-11-20 11:37:07.391000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.965 [2024-11-20 11:37:07.391019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:81120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.965 [2024-11-20 11:37:07.391140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.965 [2024-11-20 11:37:07.391155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:81128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.965 [2024-11-20 11:37:07.391165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.965 [2024-11-20 11:37:07.391177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:81136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.965 [2024-11-20 11:37:07.391187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.965 [2024-11-20 11:37:07.391509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:81144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.965 [2024-11-20 11:37:07.391523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.965 [2024-11-20 11:37:07.391534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:81152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.965 [2024-11-20 11:37:07.391544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.965 [2024-11-20 11:37:07.391556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:81160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.965 [2024-11-20 11:37:07.391565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.965 [2024-11-20 11:37:07.391577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:81168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.965 [2024-11-20 11:37:07.391586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.965 [2024-11-20 11:37:07.391864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:81176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.965 [2024-11-20 11:37:07.391952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.965 
[2024-11-20 11:37:07.391967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:81184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.965 [2024-11-20 11:37:07.391977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.965 [2024-11-20 11:37:07.391988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:81192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.965 [2024-11-20 11:37:07.391998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.965 [2024-11-20 11:37:07.392010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:81200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.965 [2024-11-20 11:37:07.392019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.965 [2024-11-20 11:37:07.392030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:81208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.965 [2024-11-20 11:37:07.392135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.965 [2024-11-20 11:37:07.392155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:81216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.965 [2024-11-20 11:37:07.392165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.965 [2024-11-20 11:37:07.392177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:81224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.965 [2024-11-20 11:37:07.392324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.965 [2024-11-20 11:37:07.392461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:81232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.965 [2024-11-20 11:37:07.392484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.965 [2024-11-20 11:37:07.392763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:81240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.965 [2024-11-20 11:37:07.392790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.965 [2024-11-20 11:37:07.392804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:81248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.965 [2024-11-20 11:37:07.392814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.965 [2024-11-20 11:37:07.392825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:81256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.965 [2024-11-20 11:37:07.392835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.965 [2024-11-20 11:37:07.392847] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:81264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.965 [2024-11-20 11:37:07.392856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.965 [2024-11-20 11:37:07.392867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:81272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.965 [2024-11-20 11:37:07.392877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.965 [2024-11-20 11:37:07.393138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:81280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.965 [2024-11-20 11:37:07.393257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.965 [2024-11-20 11:37:07.393279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:81288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.965 [2024-11-20 11:37:07.393290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.965 [2024-11-20 11:37:07.393301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:81296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.965 [2024-11-20 11:37:07.393571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.965 [2024-11-20 11:37:07.393830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:81304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.965 [2024-11-20 11:37:07.393858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.965 [2024-11-20 11:37:07.393873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:81312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.965 [2024-11-20 11:37:07.393883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.965 [2024-11-20 11:37:07.393895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:81320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.965 [2024-11-20 11:37:07.393905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.965 [2024-11-20 11:37:07.393916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:81328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.965 [2024-11-20 11:37:07.393926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.965 [2024-11-20 11:37:07.393938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:81336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.966 [2024-11-20 11:37:07.394079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.966 [2024-11-20 11:37:07.394212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:25 nsid:1 lba:81344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.966 [2024-11-20 11:37:07.394227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.966 [2024-11-20 11:37:07.394242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:81352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.966 [2024-11-20 11:37:07.394255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.966 [2024-11-20 11:37:07.394270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:81360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.966 [2024-11-20 11:37:07.394283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.966 [2024-11-20 11:37:07.394703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:81368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.966 [2024-11-20 11:37:07.394720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.966 [2024-11-20 11:37:07.394736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:81376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.966 [2024-11-20 11:37:07.394749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.966 [2024-11-20 11:37:07.394763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:81384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.966 [2024-11-20 11:37:07.394777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.966 [2024-11-20 11:37:07.395183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:81392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.966 [2024-11-20 11:37:07.395197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.966 [2024-11-20 11:37:07.395212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:81400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.966 [2024-11-20 11:37:07.395225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.966 [2024-11-20 11:37:07.395240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:81408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.966 [2024-11-20 11:37:07.395253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.966 [2024-11-20 11:37:07.395654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:81416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.966 [2024-11-20 11:37:07.395676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.966 [2024-11-20 11:37:07.395692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:81424 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:24:21.966 [2024-11-20 11:37:07.395705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.966 [2024-11-20 11:37:07.395720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:81432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.966 [2024-11-20 11:37:07.396105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.966 [2024-11-20 11:37:07.396135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:81440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.966 [2024-11-20 11:37:07.396149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.966 [2024-11-20 11:37:07.396166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:81448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.966 [2024-11-20 11:37:07.396179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.966 [2024-11-20 11:37:07.396202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:81456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.966 [2024-11-20 11:37:07.396215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.966 [2024-11-20 11:37:07.396325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:81464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.966 [2024-11-20 11:37:07.396343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.966 [2024-11-20 11:37:07.396358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:81472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.966 [2024-11-20 11:37:07.396669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.966 [2024-11-20 11:37:07.396692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:81480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.966 [2024-11-20 11:37:07.396705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.966 [2024-11-20 11:37:07.396730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:81488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.966 [2024-11-20 11:37:07.396993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.966 [2024-11-20 11:37:07.397023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:81496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.966 [2024-11-20 11:37:07.397037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.966 [2024-11-20 11:37:07.397052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:81504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.966 [2024-11-20 
11:37:07.397064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.966 [2024-11-20 11:37:07.397079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:81512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.966 [2024-11-20 11:37:07.397477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.966 [2024-11-20 11:37:07.397504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:81520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.966 [2024-11-20 11:37:07.397515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.966 [2024-11-20 11:37:07.397526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:80848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.966 [2024-11-20 11:37:07.397537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.966 [2024-11-20 11:37:07.397548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:80856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.966 [2024-11-20 11:37:07.397558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.966 [2024-11-20 11:37:07.397842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:80864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.966 [2024-11-20 11:37:07.397876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.966 [2024-11-20 11:37:07.397891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:80872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.966 [2024-11-20 11:37:07.397901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.966 [2024-11-20 11:37:07.397912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:80880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.966 [2024-11-20 11:37:07.397922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.966 [2024-11-20 11:37:07.397934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:80888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.966 [2024-11-20 11:37:07.398166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.966 [2024-11-20 11:37:07.398194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:80896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.966 [2024-11-20 11:37:07.398205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.966 [2024-11-20 11:37:07.398216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:81528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.966 [2024-11-20 11:37:07.398226] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.966 [2024-11-20 11:37:07.398237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:80904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.966 [2024-11-20 11:37:07.398246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.966 [2024-11-20 11:37:07.398502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:80912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.966 [2024-11-20 11:37:07.398526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.966 [2024-11-20 11:37:07.398539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:80920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.966 [2024-11-20 11:37:07.398549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.967 [2024-11-20 11:37:07.398814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:80928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.967 [2024-11-20 11:37:07.398836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.967 [2024-11-20 11:37:07.398849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:80936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.967 [2024-11-20 11:37:07.398859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.967 [2024-11-20 11:37:07.398870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:80944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.967 [2024-11-20 11:37:07.398879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.967 [2024-11-20 11:37:07.398891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:80952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.967 [2024-11-20 11:37:07.398906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.967 [2024-11-20 11:37:07.399018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:80960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.967 [2024-11-20 11:37:07.399035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.967 [2024-11-20 11:37:07.399048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:80968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.967 [2024-11-20 11:37:07.399314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.967 [2024-11-20 11:37:07.399538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:80976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.967 [2024-11-20 11:37:07.399561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.967 [2024-11-20 11:37:07.399576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:80984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.967 [2024-11-20 11:37:07.399586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.967 [2024-11-20 11:37:07.399599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:80992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.967 [2024-11-20 11:37:07.399608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.967 [2024-11-20 11:37:07.399734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:81000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.967 [2024-11-20 11:37:07.399749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.967 [2024-11-20 11:37:07.399760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:81008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.967 [2024-11-20 11:37:07.399900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.967 [2024-11-20 11:37:07.400015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:81016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.967 [2024-11-20 11:37:07.400028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.967 [2024-11-20 11:37:07.400041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:81536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.967 [2024-11-20 11:37:07.400050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.967 [2024-11-20 11:37:07.400062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:81544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.967 [2024-11-20 11:37:07.400210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.967 [2024-11-20 11:37:07.400324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:81552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.967 [2024-11-20 11:37:07.400335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.967 [2024-11-20 11:37:07.400347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:81560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.967 [2024-11-20 11:37:07.400358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.967 [2024-11-20 11:37:07.400369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:81568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.967 [2024-11-20 11:37:07.400636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.967 [2024-11-20 11:37:07.400666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:81576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.967 [2024-11-20 11:37:07.400678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.967 [2024-11-20 11:37:07.400689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:81584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.967 [2024-11-20 11:37:07.400699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.967 [2024-11-20 11:37:07.400711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:81592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.967 [2024-11-20 11:37:07.400959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.967 [2024-11-20 11:37:07.400984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:81600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.967 [2024-11-20 11:37:07.400995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.967 [2024-11-20 11:37:07.401006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:81608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.967 [2024-11-20 11:37:07.401016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.967 [2024-11-20 11:37:07.401028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:81616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.967 [2024-11-20 11:37:07.401037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.967 [2024-11-20 11:37:07.401291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:81624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.967 [2024-11-20 11:37:07.401309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.967 [2024-11-20 11:37:07.401322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:81632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.967 [2024-11-20 11:37:07.401332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.967 [2024-11-20 11:37:07.401344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:81640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.967 [2024-11-20 11:37:07.401355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.967 [2024-11-20 11:37:07.401366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:81648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.967 [2024-11-20 11:37:07.401592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.967 
[2024-11-20 11:37:07.401629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:81656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.967 [2024-11-20 11:37:07.401642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.967 [2024-11-20 11:37:07.401653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:81664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.967 [2024-11-20 11:37:07.401663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.967 [2024-11-20 11:37:07.401674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:81672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.967 [2024-11-20 11:37:07.401685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.967 [2024-11-20 11:37:07.402050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:81680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.967 [2024-11-20 11:37:07.402068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.967 [2024-11-20 11:37:07.402081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:81688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.967 [2024-11-20 11:37:07.402091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.967 [2024-11-20 11:37:07.402103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:81696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.967 [2024-11-20 11:37:07.402384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.967 [2024-11-20 11:37:07.402399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:81704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.967 [2024-11-20 11:37:07.402409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.967 [2024-11-20 11:37:07.402420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:81712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.968 [2024-11-20 11:37:07.402431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.968 [2024-11-20 11:37:07.402442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:81720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.968 [2024-11-20 11:37:07.402547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.968 [2024-11-20 11:37:07.402571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:81728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.968 [2024-11-20 11:37:07.402583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.968 [2024-11-20 11:37:07.402830] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:81736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.968 [2024-11-20 11:37:07.402854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.968 [2024-11-20 11:37:07.402867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:81744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.968 [2024-11-20 11:37:07.402878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.968 [2024-11-20 11:37:07.402889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:81752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.968 [2024-11-20 11:37:07.402899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.968 [2024-11-20 11:37:07.402910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:81760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.968 [2024-11-20 11:37:07.403144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.968 [2024-11-20 11:37:07.403175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.968 [2024-11-20 11:37:07.403187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.968 [2024-11-20 11:37:07.403198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:81776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.968 [2024-11-20 11:37:07.403209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.968 [2024-11-20 11:37:07.403220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:81784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.968 [2024-11-20 11:37:07.403230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.968 [2024-11-20 11:37:07.403558] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:21.968 [2024-11-20 11:37:07.403574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81792 len:8 PRP1 0x0 PRP2 0x0 00:24:21.968 [2024-11-20 11:37:07.403584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.968 [2024-11-20 11:37:07.403714] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:21.968 [2024-11-20 11:37:07.403735] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:21.968 [2024-11-20 11:37:07.403972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81800 len:8 PRP1 0x0 PRP2 0x0 00:24:21.968 [2024-11-20 11:37:07.403987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.968 [2024-11-20 11:37:07.403999] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting 
queued i/o 00:24:21.968 [2024-11-20 11:37:07.404007] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:21.968 [2024-11-20 11:37:07.404015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81808 len:8 PRP1 0x0 PRP2 0x0 00:24:21.968 [2024-11-20 11:37:07.404024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.968 [2024-11-20 11:37:07.404034] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:21.968 [2024-11-20 11:37:07.404309] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:21.968 [2024-11-20 11:37:07.404322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81816 len:8 PRP1 0x0 PRP2 0x0 00:24:21.968 [2024-11-20 11:37:07.404332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.968 [2024-11-20 11:37:07.404344] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:21.968 [2024-11-20 11:37:07.404353] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:21.968 [2024-11-20 11:37:07.404361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81824 len:8 PRP1 0x0 PRP2 0x0 00:24:21.968 [2024-11-20 11:37:07.404370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.968 [2024-11-20 11:37:07.404498] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:21.968 [2024-11-20 11:37:07.404513] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:21.968 [2024-11-20 11:37:07.404750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81832 len:8 PRP1 0x0 PRP2 0x0 00:24:21.968 [2024-11-20 11:37:07.404775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.968 [2024-11-20 11:37:07.404787] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:21.968 [2024-11-20 11:37:07.404795] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:21.968 [2024-11-20 11:37:07.404803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81840 len:8 PRP1 0x0 PRP2 0x0 00:24:21.968 [2024-11-20 11:37:07.404812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.968 [2024-11-20 11:37:07.404822] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:21.968 [2024-11-20 11:37:07.404830] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:21.968 [2024-11-20 11:37:07.404838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81848 len:8 PRP1 0x0 PRP2 0x0 00:24:21.968 [2024-11-20 11:37:07.405073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.968 [2024-11-20 11:37:07.405099] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:21.968 [2024-11-20 11:37:07.405108] 
nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:21.968 [2024-11-20 11:37:07.405118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81856 len:8 PRP1 0x0 PRP2 0x0 00:24:21.968 [2024-11-20 11:37:07.405128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.968 [2024-11-20 11:37:07.405472] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:21.968 [2024-11-20 11:37:07.405502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.968 [2024-11-20 11:37:07.405515] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:21.968 [2024-11-20 11:37:07.405524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.968 [2024-11-20 11:37:07.405535] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:21.968 [2024-11-20 11:37:07.405544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.968 [2024-11-20 11:37:07.405554] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:21.968 [2024-11-20 11:37:07.405563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.968 [2024-11-20 11:37:07.405573] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1408f50 is same with the state(6) to be set 00:24:21.968 [2024-11-20 11:37:07.406020] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:24:21.968 [2024-11-20 11:37:07.406061] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1408f50 (9): Bad file descriptor 00:24:21.968 [2024-11-20 11:37:07.406371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.968 [2024-11-20 11:37:07.406408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1408f50 with addr=10.0.0.3, port=4420 00:24:21.968 [2024-11-20 11:37:07.406421] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1408f50 is same with the state(6) to be set 00:24:21.968 [2024-11-20 11:37:07.406444] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1408f50 (9): Bad file descriptor 00:24:21.968 [2024-11-20 11:37:07.406463] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:24:21.968 [2024-11-20 11:37:07.406733] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:24:21.968 [2024-11-20 11:37:07.406748] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:24:21.968 [2024-11-20 11:37:07.406761] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 
00:24:21.968 [2024-11-20 11:37:07.406773] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:24:21.968 11:37:07 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@56 -- # sleep 2 00:24:23.836 5052.50 IOPS, 19.74 MiB/s [2024-11-20T11:37:09.426Z] 3368.33 IOPS, 13.16 MiB/s [2024-11-20T11:37:09.426Z] [2024-11-20 11:37:09.406922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.837 [2024-11-20 11:37:09.406998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1408f50 with addr=10.0.0.3, port=4420 00:24:23.837 [2024-11-20 11:37:09.407015] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1408f50 is same with the state(6) to be set 00:24:23.837 [2024-11-20 11:37:09.407043] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1408f50 (9): Bad file descriptor 00:24:23.837 [2024-11-20 11:37:09.407066] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:24:23.837 [2024-11-20 11:37:09.407077] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:24:23.837 [2024-11-20 11:37:09.407089] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:24:23.837 [2024-11-20 11:37:09.407101] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:24:23.837 [2024-11-20 11:37:09.407113] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:24:23.837 11:37:09 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@57 -- # get_controller 00:24:23.837 11:37:09 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:24.095 11:37:09 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name' 00:24:24.353 11:37:09 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@57 -- # [[ NVMe0 == \N\V\M\e\0 ]] 00:24:24.353 11:37:09 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@58 -- # get_bdev 00:24:24.353 11:37:09 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:24:24.353 11:37:09 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name' 00:24:24.611 11:37:10 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@58 -- # [[ NVMe0n1 == \N\V\M\e\0\n\1 ]] 00:24:24.611 11:37:10 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@61 -- # sleep 5 00:24:25.803 2526.25 IOPS, 9.87 MiB/s [2024-11-20T11:37:11.664Z] 2021.00 IOPS, 7.89 MiB/s [2024-11-20T11:37:11.664Z] [2024-11-20 11:37:11.407259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.075 [2024-11-20 11:37:11.407343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1408f50 with addr=10.0.0.3, port=4420 00:24:26.075 [2024-11-20 11:37:11.407362] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1408f50 is same with the state(6) to be set 00:24:26.075 [2024-11-20 11:37:11.407390] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1408f50 (9): Bad file descriptor 00:24:26.075 [2024-11-20 11:37:11.407410] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: 
*ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:24:26.075 [2024-11-20 11:37:11.407421] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:24:26.075 [2024-11-20 11:37:11.407433] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:24:26.075 [2024-11-20 11:37:11.407445] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:24:26.075 [2024-11-20 11:37:11.407457] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:24:28.044 1684.17 IOPS, 6.58 MiB/s [2024-11-20T11:37:13.633Z] 1443.57 IOPS, 5.64 MiB/s [2024-11-20T11:37:13.633Z] [2024-11-20 11:37:13.407496] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:24:28.044 [2024-11-20 11:37:13.407567] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:24:28.044 [2024-11-20 11:37:13.407581] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:24:28.044 [2024-11-20 11:37:13.407592] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] already in failed state 00:24:28.044 [2024-11-20 11:37:13.407605] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:24:28.977 1263.12 IOPS, 4.93 MiB/s 00:24:28.978 Latency(us) 00:24:28.978 [2024-11-20T11:37:14.567Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:28.978 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:24:28.978 Verification LBA range: start 0x0 length 0x4000 00:24:28.978 NVMe0n1 : 8.21 1231.18 4.81 15.60 0.00 102731.25 2338.44 7046430.72 00:24:28.978 [2024-11-20T11:37:14.567Z] =================================================================================================================== 00:24:28.978 [2024-11-20T11:37:14.567Z] Total : 1231.18 4.81 15.60 0.00 102731.25 2338.44 7046430.72 00:24:28.978 { 00:24:28.978 "results": [ 00:24:28.978 { 00:24:28.978 "job": "NVMe0n1", 00:24:28.978 "core_mask": "0x4", 00:24:28.978 "workload": "verify", 00:24:28.978 "status": "finished", 00:24:28.978 "verify_range": { 00:24:28.978 "start": 0, 00:24:28.978 "length": 16384 00:24:28.978 }, 00:24:28.978 "queue_depth": 128, 00:24:28.978 "io_size": 4096, 00:24:28.978 "runtime": 8.207602, 00:24:28.978 "iops": 1231.1756832263552, 00:24:28.978 "mibps": 4.80928001260295, 00:24:28.978 "io_failed": 128, 00:24:28.978 "io_timeout": 0, 00:24:28.978 "avg_latency_us": 102731.24752965006, 00:24:28.978 "min_latency_us": 2338.4436363636364, 00:24:28.978 "max_latency_us": 7046430.72 00:24:28.978 } 00:24:28.978 ], 00:24:28.978 "core_count": 1 00:24:28.978 } 00:24:29.544 11:37:15 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@62 -- # get_controller 00:24:29.544 11:37:15 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:29.544 11:37:15 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name' 00:24:30.109 11:37:15 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@62 -- # [[ '' == '' ]] 00:24:30.109 11:37:15 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@63 -- # get_bdev 00:24:30.109 11:37:15 
nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name' 00:24:30.109 11:37:15 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:24:30.368 11:37:15 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@63 -- # [[ '' == '' ]] 00:24:30.368 11:37:15 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@65 -- # wait 96893 00:24:30.368 11:37:15 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@67 -- # killprocess 96858 00:24:30.368 11:37:15 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # '[' -z 96858 ']' 00:24:30.368 11:37:15 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # kill -0 96858 00:24:30.368 11:37:15 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # uname 00:24:30.368 11:37:15 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:30.368 11:37:15 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 96858 00:24:30.368 11:37:15 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:24:30.368 11:37:15 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:24:30.368 killing process with pid 96858 00:24:30.368 11:37:15 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 96858' 00:24:30.368 11:37:15 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@973 -- # kill 96858 00:24:30.368 11:37:15 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@978 -- # wait 96858 00:24:30.368 Received shutdown signal, test time was about 9.691215 seconds 00:24:30.368 00:24:30.368 Latency(us) 00:24:30.368 [2024-11-20T11:37:15.957Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:30.368 [2024-11-20T11:37:15.957Z] =================================================================================================================== 00:24:30.368 [2024-11-20T11:37:15.957Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:30.627 11:37:16 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:24:30.886 [2024-11-20 11:37:16.260649] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:24:30.886 11:37:16 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@74 -- # bdevperf_pid=97052 00:24:30.886 11:37:16 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:24:30.886 11:37:16 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@76 -- # waitforlisten 97052 /var/tmp/bdevperf.sock 00:24:30.886 11:37:16 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # '[' -z 97052 ']' 00:24:30.886 11:37:16 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:30.886 11:37:16 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:30.886 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
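The trace above re-adds the NVMe/TCP listener on 10.0.0.3:4420 and launches a fresh bdevperf instance on its own RPC socket for the next timeout scenario. A minimal sketch of those two steps, using the paths and flags recorded in this run (the harness itself backgrounds bdevperf and polls the socket via waitforlisten; -z defers the workload until the perform_tests RPC seen later in the log):

  # restore the target listener removed by the previous scenario
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener \
      nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
  # start bdevperf on core 2 (-m 0x4) against its own RPC socket
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z \
      -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f &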
00:24:30.886 11:37:16 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:30.886 11:37:16 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:30.886 11:37:16 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:30.886 [2024-11-20 11:37:16.333124] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 00:24:30.886 [2024-11-20 11:37:16.333221] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97052 ] 00:24:31.144 [2024-11-20 11:37:16.482761] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:31.144 [2024-11-20 11:37:16.522155] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:31.144 11:37:16 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:31.144 11:37:16 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@868 -- # return 0 00:24:31.144 11:37:16 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:24:31.402 11:37:16 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1 00:24:31.660 NVMe0n1 00:24:31.660 11:37:17 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@84 -- # rpc_pid=97080 00:24:31.660 11:37:17 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@83 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:31.660 11:37:17 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@86 -- # sleep 1 00:24:31.919 Running I/O for 10 seconds... 
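Before the 10-second run starts, the script points rpc.py at the bdevperf socket, sets the NVMe bdev options, attaches the controller with the reconnect knobs under test, and then triggers the queued job. Condensed from the commands recorded above; the reading of -r -1 as an infinite bdev-level retry count is an assumption, the commands themselves are verbatim from the trace:

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # assumed meaning: bdev retry count of -1, i.e. keep retrying failed I/O
  $RPC -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
  # 5 s ctrlr-loss timeout, 2 s fast-io-fail, 1 s between reconnect attempts
  $RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
      -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
      --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1
  # start the queued verify workload (runs for the -t 10 seconds given above)
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
      -s /var/tmp/bdevperf.sock perform_tests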
00:24:32.870 11:37:18 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:24:33.184 8694.00 IOPS, 33.96 MiB/s [2024-11-20T11:37:18.773Z] [2024-11-20 11:37:18.531881] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c419a0 is same with the state(6) to be set 00:24:33.184 [2024-11-20 11:37:18.531949] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c419a0 is same with the state(6) to be set 00:24:33.184 [2024-11-20 11:37:18.531960] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c419a0 is same with the state(6) to be set 00:24:33.184 [2024-11-20 11:37:18.531969] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c419a0 is same with the state(6) to be set 00:24:33.184 [2024-11-20 11:37:18.531978] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c419a0 is same with the state(6) to be set 00:24:33.184 [2024-11-20 11:37:18.531986] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c419a0 is same with the state(6) to be set 00:24:33.184 [2024-11-20 11:37:18.531995] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c419a0 is same with the state(6) to be set 00:24:33.184 [2024-11-20 11:37:18.532003] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c419a0 is same with the state(6) to be set 00:24:33.185 [2024-11-20 11:37:18.532011] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c419a0 is same with the state(6) to be set 00:24:33.185 [2024-11-20 11:37:18.532019] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c419a0 is same with the state(6) to be set 00:24:33.185 [2024-11-20 11:37:18.532027] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c419a0 is same with the state(6) to be set 00:24:33.185 [2024-11-20 11:37:18.532035] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c419a0 is same with the state(6) to be set 00:24:33.185 [2024-11-20 11:37:18.532044] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c419a0 is same with the state(6) to be set 00:24:33.185 [2024-11-20 11:37:18.532052] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c419a0 is same with the state(6) to be set 00:24:33.185 [2024-11-20 11:37:18.532060] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c419a0 is same with the state(6) to be set 00:24:33.185 [2024-11-20 11:37:18.532070] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c419a0 is same with the state(6) to be set 00:24:33.185 [2024-11-20 11:37:18.532079] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c419a0 is same with the state(6) to be set 00:24:33.185 [2024-11-20 11:37:18.532087] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c419a0 is same with the state(6) to be set 00:24:33.185 [2024-11-20 11:37:18.532095] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c419a0 is same with the state(6) to be set 00:24:33.185 [2024-11-20 11:37:18.532103] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c419a0 is same with the state(6) to be set 
00:24:33.185 [2024-11-20 11:37:18.532111] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c419a0 is same with the state(6) to be set 00:24:33.185 [2024-11-20 11:37:18.532119] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c419a0 is same with the state(6) to be set 00:24:33.185 [2024-11-20 11:37:18.532127] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c419a0 is same with the state(6) to be set 00:24:33.185 [2024-11-20 11:37:18.532135] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c419a0 is same with the state(6) to be set 00:24:33.185 [2024-11-20 11:37:18.532143] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c419a0 is same with the state(6) to be set 00:24:33.185 [2024-11-20 11:37:18.532151] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c419a0 is same with the state(6) to be set 00:24:33.185 [2024-11-20 11:37:18.532159] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c419a0 is same with the state(6) to be set 00:24:33.185 [2024-11-20 11:37:18.532169] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c419a0 is same with the state(6) to be set 00:24:33.185 [2024-11-20 11:37:18.532177] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c419a0 is same with the state(6) to be set 00:24:33.185 [2024-11-20 11:37:18.532185] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c419a0 is same with the state(6) to be set 00:24:33.185 [2024-11-20 11:37:18.532194] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c419a0 is same with the state(6) to be set 00:24:33.185 [2024-11-20 11:37:18.532202] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c419a0 is same with the state(6) to be set 00:24:33.185 [2024-11-20 11:37:18.532210] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c419a0 is same with the state(6) to be set 00:24:33.185 [2024-11-20 11:37:18.532220] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c419a0 is same with the state(6) to be set 00:24:33.185 [2024-11-20 11:37:18.532228] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c419a0 is same with the state(6) to be set 00:24:33.185 [2024-11-20 11:37:18.532237] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c419a0 is same with the state(6) to be set 00:24:33.185 [2024-11-20 11:37:18.532245] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c419a0 is same with the state(6) to be set 00:24:33.185 [2024-11-20 11:37:18.532253] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c419a0 is same with the state(6) to be set 00:24:33.185 [2024-11-20 11:37:18.532261] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c419a0 is same with the state(6) to be set 00:24:33.185 [2024-11-20 11:37:18.532270] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c419a0 is same with the state(6) to be set 00:24:33.185 [2024-11-20 11:37:18.532278] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c419a0 is same with the state(6) to be set 00:24:33.185 [2024-11-20 11:37:18.532286] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x1c419a0 is same with the state(6) to be set 00:24:33.185 [2024-11-20 11:37:18.532294] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c419a0 is same with the state(6) to be set 00:24:33.185 [2024-11-20 11:37:18.532303] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c419a0 is same with the state(6) to be set 00:24:33.185 [2024-11-20 11:37:18.532311] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c419a0 is same with the state(6) to be set 00:24:33.185 [2024-11-20 11:37:18.532319] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c419a0 is same with the state(6) to be set 00:24:33.185 [2024-11-20 11:37:18.532328] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c419a0 is same with the state(6) to be set 00:24:33.185 [2024-11-20 11:37:18.532336] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c419a0 is same with the state(6) to be set 00:24:33.185 [2024-11-20 11:37:18.532344] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c419a0 is same with the state(6) to be set 00:24:33.185 [2024-11-20 11:37:18.535024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:80960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.185 [2024-11-20 11:37:18.535070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.185 [2024-11-20 11:37:18.535102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:80968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.185 [2024-11-20 11:37:18.535119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.185 [2024-11-20 11:37:18.535140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:80976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.185 [2024-11-20 11:37:18.535155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.185 [2024-11-20 11:37:18.535173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:80984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.185 [2024-11-20 11:37:18.535187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.185 [2024-11-20 11:37:18.535206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:80992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.185 [2024-11-20 11:37:18.535221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.185 [2024-11-20 11:37:18.535239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:81000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.185 [2024-11-20 11:37:18.535254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.185 [2024-11-20 11:37:18.535272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:81008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.185 [2024-11-20 11:37:18.535286] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.185 [2024-11-20 11:37:18.535305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:81016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.185 [2024-11-20 11:37:18.535319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.185 [2024-11-20 11:37:18.535337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:81024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.185 [2024-11-20 11:37:18.535352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.185 [2024-11-20 11:37:18.535370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:81032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.185 [2024-11-20 11:37:18.535385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.185 [2024-11-20 11:37:18.535403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:81040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.185 [2024-11-20 11:37:18.535418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.185 [2024-11-20 11:37:18.535437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:81048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.185 [2024-11-20 11:37:18.535451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.185 [2024-11-20 11:37:18.535469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:81056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.185 [2024-11-20 11:37:18.535484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.186 [2024-11-20 11:37:18.535504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:81064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.186 [2024-11-20 11:37:18.535519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.186 [2024-11-20 11:37:18.535538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:81072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.186 [2024-11-20 11:37:18.535552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.186 [2024-11-20 11:37:18.535579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:81080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.186 [2024-11-20 11:37:18.535594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.186 [2024-11-20 11:37:18.535632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:81088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.186 [2024-11-20 11:37:18.535650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.186 [2024-11-20 11:37:18.535669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:81096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.186 [2024-11-20 11:37:18.535684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.186 [2024-11-20 11:37:18.535702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:81104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.186 [2024-11-20 11:37:18.535717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.186 [2024-11-20 11:37:18.535735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:81112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.186 [2024-11-20 11:37:18.535750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.186 [2024-11-20 11:37:18.535768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:81120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.186 [2024-11-20 11:37:18.535782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.186 [2024-11-20 11:37:18.535800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:81128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.186 [2024-11-20 11:37:18.535815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.186 [2024-11-20 11:37:18.535833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:81136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.186 [2024-11-20 11:37:18.535848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.186 [2024-11-20 11:37:18.535866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:81144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.186 [2024-11-20 11:37:18.535881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.186 [2024-11-20 11:37:18.535900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:81152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.186 [2024-11-20 11:37:18.535914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.186 [2024-11-20 11:37:18.535932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:81160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.186 [2024-11-20 11:37:18.535956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.186 [2024-11-20 11:37:18.535985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:81168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.186 [2024-11-20 11:37:18.536002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.186 [2024-11-20 11:37:18.536026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:81176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.186 [2024-11-20 11:37:18.536041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.186 [2024-11-20 11:37:18.536059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:81184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.186 [2024-11-20 11:37:18.536074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.186 [2024-11-20 11:37:18.536092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:81192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.186 [2024-11-20 11:37:18.536106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.186 [2024-11-20 11:37:18.536124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:81200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.186 [2024-11-20 11:37:18.536139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.186 [2024-11-20 11:37:18.536157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:81208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.186 [2024-11-20 11:37:18.536171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.186 [2024-11-20 11:37:18.536190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:81216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.186 [2024-11-20 11:37:18.536204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.186 [2024-11-20 11:37:18.536222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:81224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.186 [2024-11-20 11:37:18.536237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.186 [2024-11-20 11:37:18.536255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:81232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.186 [2024-11-20 11:37:18.536270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.186 [2024-11-20 11:37:18.536288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:81240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.186 [2024-11-20 11:37:18.536302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.186 [2024-11-20 11:37:18.536321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:81248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.186 [2024-11-20 11:37:18.536335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.186 
[2024-11-20 11:37:18.536353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:81256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.186 [2024-11-20 11:37:18.536368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.186 [2024-11-20 11:37:18.536387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:81264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.186 [2024-11-20 11:37:18.536402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.186 [2024-11-20 11:37:18.536420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:81272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.186 [2024-11-20 11:37:18.536434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.186 [2024-11-20 11:37:18.536452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:81280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.186 [2024-11-20 11:37:18.536467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.186 [2024-11-20 11:37:18.536485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:81288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.186 [2024-11-20 11:37:18.536501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.186 [2024-11-20 11:37:18.536519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:81296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.186 [2024-11-20 11:37:18.536533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.186 [2024-11-20 11:37:18.536554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:81304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.186 [2024-11-20 11:37:18.536569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.186 [2024-11-20 11:37:18.536587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:81312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.186 [2024-11-20 11:37:18.536602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.186 [2024-11-20 11:37:18.536635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:81320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.186 [2024-11-20 11:37:18.536651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.187 [2024-11-20 11:37:18.536670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:81328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.187 [2024-11-20 11:37:18.536684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.187 [2024-11-20 11:37:18.536712] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:81336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.187 [2024-11-20 11:37:18.536727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.187 [2024-11-20 11:37:18.536745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:81344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.187 [2024-11-20 11:37:18.536761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.187 [2024-11-20 11:37:18.536779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:81352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.187 [2024-11-20 11:37:18.536794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.187 [2024-11-20 11:37:18.536812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:81360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.187 [2024-11-20 11:37:18.536828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.187 [2024-11-20 11:37:18.536846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:81368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.187 [2024-11-20 11:37:18.536861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.187 [2024-11-20 11:37:18.536879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:81376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.187 [2024-11-20 11:37:18.536894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.187 [2024-11-20 11:37:18.536912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:81384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.187 [2024-11-20 11:37:18.536927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.187 [2024-11-20 11:37:18.536945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:81392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.187 [2024-11-20 11:37:18.536960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.187 [2024-11-20 11:37:18.536978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:81400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.187 [2024-11-20 11:37:18.536993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.187 [2024-11-20 11:37:18.537011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:81408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.187 [2024-11-20 11:37:18.537025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.187 [2024-11-20 11:37:18.537044] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:81416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.187 [2024-11-20 11:37:18.537058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.187 [2024-11-20 11:37:18.537076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:81424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.187 [2024-11-20 11:37:18.537091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.187 [2024-11-20 11:37:18.537111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:81432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.187 [2024-11-20 11:37:18.537126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.187 [2024-11-20 11:37:18.537144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:81440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.187 [2024-11-20 11:37:18.537158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.187 [2024-11-20 11:37:18.537176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:81448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.187 [2024-11-20 11:37:18.537191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.187 [2024-11-20 11:37:18.537219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:81456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.187 [2024-11-20 11:37:18.537234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.187 [2024-11-20 11:37:18.537253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:81464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.187 [2024-11-20 11:37:18.537267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.187 [2024-11-20 11:37:18.537285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:81472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.187 [2024-11-20 11:37:18.537300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.187 [2024-11-20 11:37:18.537318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:81480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.187 [2024-11-20 11:37:18.537333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.187 [2024-11-20 11:37:18.537351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:81488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.187 [2024-11-20 11:37:18.537365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.187 [2024-11-20 11:37:18.537383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:122 nsid:1 lba:81496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.187 [2024-11-20 11:37:18.537398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.187 [2024-11-20 11:37:18.537416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:81504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.187 [2024-11-20 11:37:18.537430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.187 [2024-11-20 11:37:18.537448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:81512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.187 [2024-11-20 11:37:18.537462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.187 [2024-11-20 11:37:18.537482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:81520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.187 [2024-11-20 11:37:18.537497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.187 [2024-11-20 11:37:18.537515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:81528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.187 [2024-11-20 11:37:18.537530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.187 [2024-11-20 11:37:18.537548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:81536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.187 [2024-11-20 11:37:18.537563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.187 [2024-11-20 11:37:18.537581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:81544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.187 [2024-11-20 11:37:18.537595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.187 [2024-11-20 11:37:18.537625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:81552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.187 [2024-11-20 11:37:18.537642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.187 [2024-11-20 11:37:18.537662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:81560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.187 [2024-11-20 11:37:18.537679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.187 [2024-11-20 11:37:18.537697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:81568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.188 [2024-11-20 11:37:18.537712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.188 [2024-11-20 11:37:18.537736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:81576 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.188 [2024-11-20 11:37:18.537751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.188 [2024-11-20 11:37:18.537769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:81584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.188 [2024-11-20 11:37:18.537783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.188 [2024-11-20 11:37:18.537822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:81592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.188 [2024-11-20 11:37:18.537838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.188 [2024-11-20 11:37:18.537856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:81600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.188 [2024-11-20 11:37:18.537871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.188 [2024-11-20 11:37:18.537889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:81608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.188 [2024-11-20 11:37:18.537903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.188 [2024-11-20 11:37:18.537921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:81616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.188 [2024-11-20 11:37:18.537936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.188 [2024-11-20 11:37:18.537954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:81624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.188 [2024-11-20 11:37:18.537969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.188 [2024-11-20 11:37:18.537995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:81632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.188 [2024-11-20 11:37:18.538010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.188 [2024-11-20 11:37:18.538027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:81640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.188 [2024-11-20 11:37:18.538042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.188 [2024-11-20 11:37:18.538060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:81648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.188 [2024-11-20 11:37:18.538075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.188 [2024-11-20 11:37:18.538093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:81656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:33.188 [2024-11-20 11:37:18.538107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.188 [2024-11-20 11:37:18.538127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:81664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.188 [2024-11-20 11:37:18.538141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.188 [2024-11-20 11:37:18.538159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:81672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.188 [2024-11-20 11:37:18.538174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.188 [2024-11-20 11:37:18.538192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:81680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.188 [2024-11-20 11:37:18.538207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.188 [2024-11-20 11:37:18.538227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:81688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.188 [2024-11-20 11:37:18.538241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.188 [2024-11-20 11:37:18.538259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:81696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.188 [2024-11-20 11:37:18.538274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.188 [2024-11-20 11:37:18.538291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:81704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.188 [2024-11-20 11:37:18.538306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.188 [2024-11-20 11:37:18.538324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:81712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.188 [2024-11-20 11:37:18.538339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.188 [2024-11-20 11:37:18.538357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:81720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.188 [2024-11-20 11:37:18.538371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.188 [2024-11-20 11:37:18.538389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:81728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.188 [2024-11-20 11:37:18.538403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.188 [2024-11-20 11:37:18.538421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:81736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.188 [2024-11-20 11:37:18.538435] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.188 [2024-11-20 11:37:18.538453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:81744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.188 [2024-11-20 11:37:18.538468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.188 [2024-11-20 11:37:18.538486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:81752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.188 [2024-11-20 11:37:18.538500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.188 [2024-11-20 11:37:18.538520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:81760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.188 [2024-11-20 11:37:18.538536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.188 [2024-11-20 11:37:18.538554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:81768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.188 [2024-11-20 11:37:18.538568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.188 [2024-11-20 11:37:18.538586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:81776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.188 [2024-11-20 11:37:18.538601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.188 [2024-11-20 11:37:18.538632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:81784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.188 [2024-11-20 11:37:18.538649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.188 [2024-11-20 11:37:18.538667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:81792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.188 [2024-11-20 11:37:18.538682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.188 [2024-11-20 11:37:18.538700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.188 [2024-11-20 11:37:18.538714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.188 [2024-11-20 11:37:18.538732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:81808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.188 [2024-11-20 11:37:18.538747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.188 [2024-11-20 11:37:18.538767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:81816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.188 [2024-11-20 11:37:18.538782] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.188 [2024-11-20 11:37:18.538799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:81824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.189 [2024-11-20 11:37:18.538814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.189 [2024-11-20 11:37:18.538832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:81832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.189 [2024-11-20 11:37:18.538846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.189 [2024-11-20 11:37:18.538864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:81840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.189 [2024-11-20 11:37:18.538879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.189 [2024-11-20 11:37:18.538896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:81848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.189 [2024-11-20 11:37:18.538911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.189 [2024-11-20 11:37:18.538929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:81856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.189 [2024-11-20 11:37:18.538943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.189 [2024-11-20 11:37:18.538962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:81864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.189 [2024-11-20 11:37:18.538977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.189 [2024-11-20 11:37:18.538994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:81872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.189 [2024-11-20 11:37:18.539009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.189 [2024-11-20 11:37:18.539027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:81880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.189 [2024-11-20 11:37:18.539041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.189 [2024-11-20 11:37:18.539061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:81888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.189 [2024-11-20 11:37:18.539076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.189 [2024-11-20 11:37:18.539094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:81896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.189 [2024-11-20 11:37:18.539108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.189 [2024-11-20 11:37:18.539126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:81904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.189 [2024-11-20 11:37:18.539140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.189 [2024-11-20 11:37:18.539158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:81912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.189 [2024-11-20 11:37:18.539173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.189 [2024-11-20 11:37:18.539191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:81920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.189 [2024-11-20 11:37:18.539206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.189 [2024-11-20 11:37:18.539223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:81928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.189 [2024-11-20 11:37:18.539238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.189 [2024-11-20 11:37:18.539256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:81936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.189 [2024-11-20 11:37:18.539270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.189 [2024-11-20 11:37:18.539291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:81944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.189 [2024-11-20 11:37:18.539305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.189 [2024-11-20 11:37:18.539323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:81952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.189 [2024-11-20 11:37:18.539338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.189 [2024-11-20 11:37:18.539355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:81960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.189 [2024-11-20 11:37:18.539371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.189 [2024-11-20 11:37:18.539389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:81968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.189 [2024-11-20 11:37:18.539403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.189 [2024-11-20 11:37:18.539455] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:33.189 [2024-11-20 11:37:18.539470] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:33.189 [2024-11-20 11:37:18.539484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:0 nsid:1 lba:81976 len:8 PRP1 0x0 PRP2 0x0 00:24:33.189 [2024-11-20 11:37:18.539499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.189 [2024-11-20 11:37:18.540555] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:24:33.189 [2024-11-20 11:37:18.540666] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d1f50 (9): Bad file descriptor 00:24:33.189 [2024-11-20 11:37:18.540771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:33.189 [2024-11-20 11:37:18.540793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d1f50 with addr=10.0.0.3, port=4420 00:24:33.189 [2024-11-20 11:37:18.540805] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d1f50 is same with the state(6) to be set 00:24:33.189 [2024-11-20 11:37:18.540823] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d1f50 (9): Bad file descriptor 00:24:33.189 [2024-11-20 11:37:18.540840] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:24:33.189 [2024-11-20 11:37:18.540849] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:24:33.189 [2024-11-20 11:37:18.540860] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:24:33.189 [2024-11-20 11:37:18.540871] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:24:33.189 [2024-11-20 11:37:18.540882] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:24:33.189 11:37:18 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@90 -- # sleep 1 00:24:34.123 5060.00 IOPS, 19.77 MiB/s [2024-11-20T11:37:19.712Z] [2024-11-20 11:37:19.541020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:34.123 [2024-11-20 11:37:19.541080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d1f50 with addr=10.0.0.3, port=4420 00:24:34.123 [2024-11-20 11:37:19.541097] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d1f50 is same with the state(6) to be set 00:24:34.123 [2024-11-20 11:37:19.541124] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d1f50 (9): Bad file descriptor 00:24:34.123 [2024-11-20 11:37:19.541144] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:24:34.123 [2024-11-20 11:37:19.541154] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:24:34.123 [2024-11-20 11:37:19.541166] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:24:34.123 [2024-11-20 11:37:19.541178] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
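The repeated "connect() failed, errno = 111" entries above are the host side trying to reconnect to 10.0.0.3:4420 while the target listener is removed; on Linux, errno 111 is ECONNREFUSED. A minimal illustrative sketch (not part of the test scripts) that reproduces the same errno when nothing is listening on the address taken from the log:

    import errno, socket

    # Illustrative only: connect to the address/port seen in the log while no
    # listener is present. On Linux this is assumed to fail with ECONNREFUSED
    # (errno 111), the same value reported by posix_sock_create above.
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.settimeout(5)
    try:
        sock.connect(("10.0.0.3", 4420))  # assumed to be refused, not timing out
    except OSError as exc:
        print(exc.errno, exc.errno == errno.ECONNREFUSED)  # expect: 111 True
    finally:
        sock.close()

Once the listener is re-added about a second later (the nvmf_subsystem_add_listener call below), the same reconnect path succeeds and the log reports "Resetting controller successful."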
00:24:34.123 [2024-11-20 11:37:19.541190] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:24:34.123 11:37:19 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:24:34.380 [2024-11-20 11:37:19.864309] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:24:34.380 11:37:19 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@92 -- # wait 97080 00:24:35.204 3373.33 IOPS, 13.18 MiB/s [2024-11-20T11:37:20.793Z] [2024-11-20 11:37:20.552049] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful. 00:24:37.145 2530.00 IOPS, 9.88 MiB/s [2024-11-20T11:37:23.669Z] 3446.20 IOPS, 13.46 MiB/s [2024-11-20T11:37:24.603Z] 4339.67 IOPS, 16.95 MiB/s [2024-11-20T11:37:25.538Z] 4989.29 IOPS, 19.49 MiB/s [2024-11-20T11:37:26.474Z] 5490.12 IOPS, 21.45 MiB/s [2024-11-20T11:37:27.410Z] 5867.11 IOPS, 22.92 MiB/s [2024-11-20T11:37:27.410Z] 6171.70 IOPS, 24.11 MiB/s 00:24:41.821 Latency(us) 00:24:41.821 [2024-11-20T11:37:27.410Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:41.821 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:24:41.821 Verification LBA range: start 0x0 length 0x4000 00:24:41.821 NVMe0n1 : 10.01 6175.91 24.12 0.00 0.00 20691.71 1980.97 3035150.89 00:24:41.821 [2024-11-20T11:37:27.410Z] =================================================================================================================== 00:24:41.821 [2024-11-20T11:37:27.410Z] Total : 6175.91 24.12 0.00 0.00 20691.71 1980.97 3035150.89 00:24:41.821 { 00:24:41.821 "results": [ 00:24:41.821 { 00:24:41.821 "job": "NVMe0n1", 00:24:41.821 "core_mask": "0x4", 00:24:41.821 "workload": "verify", 00:24:41.821 "status": "finished", 00:24:41.821 "verify_range": { 00:24:41.821 "start": 0, 00:24:41.821 "length": 16384 00:24:41.821 }, 00:24:41.821 "queue_depth": 128, 00:24:41.821 "io_size": 4096, 00:24:41.821 "runtime": 10.00873, 00:24:41.821 "iops": 6175.908431938918, 00:24:41.821 "mibps": 24.124642312261397, 00:24:41.821 "io_failed": 0, 00:24:41.821 "io_timeout": 0, 00:24:41.821 "avg_latency_us": 20691.706495162092, 00:24:41.821 "min_latency_us": 1980.9745454545455, 00:24:41.821 "max_latency_us": 3035150.8945454545 00:24:41.821 } 00:24:41.821 ], 00:24:41.821 "core_count": 1 00:24:41.821 } 00:24:41.821 11:37:27 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@97 -- # rpc_pid=97198 00:24:41.821 11:37:27 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@96 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:41.821 11:37:27 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@98 -- # sleep 1 00:24:42.079 Running I/O for 10 seconds... 
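The bdevperf summary above prints the same figures twice, once as a table and once as a JSON block. As a quick cross-check (an illustrative sketch, not part of the test run), MiB/s is simply IOPS x io_size / 2^20, so 6175.91 IOPS at a 4096-byte I/O size works out to the 24.12 MiB/s shown:

    # Minimal sketch: re-derive the MiB/s figure from the JSON summary above.
    # Field names and values are copied from that block.
    job = {
        "job": "NVMe0n1",
        "io_size": 4096,
        "runtime": 10.00873,
        "iops": 6175.908431938918,
        "avg_latency_us": 20691.706495162092,
    }
    mibps = job["iops"] * job["io_size"] / (1024 * 1024)
    print(f'{job["job"]}: {job["iops"]:.2f} IOPS -> {mibps:.2f} MiB/s '
          f'over {job["runtime"]:.2f} s (avg latency {job["avg_latency_us"]:.2f} us)')
    # -> NVMe0n1: 6175.91 IOPS -> 24.12 MiB/s over 10.01 s (avg latency 20691.71 us)

The recomputed value matches the "mibps" field reported in the JSON results.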
00:24:43.014 11:37:28 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:24:43.274 8712.00 IOPS, 34.03 MiB/s [2024-11-20T11:37:28.863Z] [2024-11-20 11:37:28.640314] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3fe90 is same with the state(6) to be set 00:24:43.274 [2024-11-20 11:37:28.640370] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3fe90 is same with the state(6) to be set 00:24:43.274 [2024-11-20 11:37:28.640383] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3fe90 is same with the state(6) to be set 00:24:43.274 [2024-11-20 11:37:28.640392] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3fe90 is same with the state(6) to be set 00:24:43.274 [2024-11-20 11:37:28.640400] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3fe90 is same with the state(6) to be set 00:24:43.274 [2024-11-20 11:37:28.640409] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3fe90 is same with the state(6) to be set 00:24:43.274 [2024-11-20 11:37:28.640417] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3fe90 is same with the state(6) to be set 00:24:43.274 [2024-11-20 11:37:28.640426] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3fe90 is same with the state(6) to be set 00:24:43.274 [2024-11-20 11:37:28.640435] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3fe90 is same with the state(6) to be set 00:24:43.274 [2024-11-20 11:37:28.640443] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3fe90 is same with the state(6) to be set 00:24:43.274 [2024-11-20 11:37:28.640451] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3fe90 is same with the state(6) to be set 00:24:43.274 [2024-11-20 11:37:28.640459] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3fe90 is same with the state(6) to be set 00:24:43.274 [2024-11-20 11:37:28.640468] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3fe90 is same with the state(6) to be set 00:24:43.274 [2024-11-20 11:37:28.640476] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3fe90 is same with the state(6) to be set 00:24:43.274 [2024-11-20 11:37:28.640484] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3fe90 is same with the state(6) to be set 00:24:43.274 [2024-11-20 11:37:28.642292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:78936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.274 [2024-11-20 11:37:28.642338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.274 [2024-11-20 11:37:28.642361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:78944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.274 [2024-11-20 11:37:28.642374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.274 [2024-11-20 11:37:28.642386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 
lba:78952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.274 [2024-11-20 11:37:28.642396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.275 [2024-11-20 11:37:28.642407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:78960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.275 [2024-11-20 11:37:28.642416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.275 [2024-11-20 11:37:28.642685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:78968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.275 [2024-11-20 11:37:28.642722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.275 [2024-11-20 11:37:28.642737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:78976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.275 [2024-11-20 11:37:28.642747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.275 [2024-11-20 11:37:28.642759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:78984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.275 [2024-11-20 11:37:28.642769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.275 [2024-11-20 11:37:28.642781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:78992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.275 [2024-11-20 11:37:28.642791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.275 [2024-11-20 11:37:28.642803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:79000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.275 [2024-11-20 11:37:28.642812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.275 [2024-11-20 11:37:28.642824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:79192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.275 [2024-11-20 11:37:28.642961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.275 [2024-11-20 11:37:28.642976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:79200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.275 [2024-11-20 11:37:28.642986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.275 [2024-11-20 11:37:28.643280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:79208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.275 [2024-11-20 11:37:28.643306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.275 [2024-11-20 11:37:28.643320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:79216 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:24:43.275 [2024-11-20 11:37:28.643329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.275 [2024-11-20 11:37:28.643341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.275 [2024-11-20 11:37:28.643350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.275 [2024-11-20 11:37:28.643361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:79232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.275 [2024-11-20 11:37:28.643371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.275 [2024-11-20 11:37:28.643382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:79240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.275 [2024-11-20 11:37:28.643391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.275 [2024-11-20 11:37:28.643535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:79248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.275 [2024-11-20 11:37:28.643806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.275 [2024-11-20 11:37:28.643834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:79256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.275 [2024-11-20 11:37:28.643846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.275 [2024-11-20 11:37:28.643858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:79264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.275 [2024-11-20 11:37:28.643867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.275 [2024-11-20 11:37:28.643879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:79272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.275 [2024-11-20 11:37:28.643888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.275 [2024-11-20 11:37:28.643900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:79280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.275 [2024-11-20 11:37:28.643909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.275 [2024-11-20 11:37:28.643920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:79288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.275 [2024-11-20 11:37:28.643930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.275 [2024-11-20 11:37:28.644302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:79296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.275 [2024-11-20 11:37:28.644327] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.275 [2024-11-20 11:37:28.644340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:79304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.275 [2024-11-20 11:37:28.644350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.275 [2024-11-20 11:37:28.644362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:79312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.275 [2024-11-20 11:37:28.644371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.275 [2024-11-20 11:37:28.644382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:79320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.275 [2024-11-20 11:37:28.644392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.275 [2024-11-20 11:37:28.644403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:79328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.275 [2024-11-20 11:37:28.644412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.275 [2024-11-20 11:37:28.644423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:79336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.275 [2024-11-20 11:37:28.644540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.275 [2024-11-20 11:37:28.644556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:79344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.275 [2024-11-20 11:37:28.644566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.275 [2024-11-20 11:37:28.644577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:79352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.275 [2024-11-20 11:37:28.644587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.275 [2024-11-20 11:37:28.644598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:79360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.276 [2024-11-20 11:37:28.644996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.276 [2024-11-20 11:37:28.645013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:79368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.276 [2024-11-20 11:37:28.645024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.276 [2024-11-20 11:37:28.645035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:79376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.276 [2024-11-20 11:37:28.645046] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.276 [2024-11-20 11:37:28.645057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:79384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.276 [2024-11-20 11:37:28.645066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.276 [2024-11-20 11:37:28.645079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:79392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.276 [2024-11-20 11:37:28.645089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.276 [2024-11-20 11:37:28.645212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:79400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.276 [2024-11-20 11:37:28.645225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.276 [2024-11-20 11:37:28.645237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:79408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.276 [2024-11-20 11:37:28.645380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.276 [2024-11-20 11:37:28.645397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:79416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.276 [2024-11-20 11:37:28.645407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.276 [2024-11-20 11:37:28.645830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:79424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.276 [2024-11-20 11:37:28.645948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.276 [2024-11-20 11:37:28.645965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:79432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.276 [2024-11-20 11:37:28.645976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.276 [2024-11-20 11:37:28.645987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:79440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.276 [2024-11-20 11:37:28.645997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.276 [2024-11-20 11:37:28.646008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:79448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.276 [2024-11-20 11:37:28.646017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.276 [2024-11-20 11:37:28.646029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:79456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.276 [2024-11-20 11:37:28.646038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.276 [2024-11-20 11:37:28.646150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:79464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.276 [2024-11-20 11:37:28.646167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.276 [2024-11-20 11:37:28.646342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:79472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.276 [2024-11-20 11:37:28.646447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.276 [2024-11-20 11:37:28.646462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:79480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.276 [2024-11-20 11:37:28.646472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.276 [2024-11-20 11:37:28.646484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:79488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.276 [2024-11-20 11:37:28.646736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.276 [2024-11-20 11:37:28.646761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:79496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.276 [2024-11-20 11:37:28.646772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.276 [2024-11-20 11:37:28.646784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:79504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.276 [2024-11-20 11:37:28.646793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.276 [2024-11-20 11:37:28.646964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:79512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.276 [2024-11-20 11:37:28.647065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.276 [2024-11-20 11:37:28.647079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:79520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.276 [2024-11-20 11:37:28.647089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.276 [2024-11-20 11:37:28.647100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:79528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.276 [2024-11-20 11:37:28.647110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.276 [2024-11-20 11:37:28.647415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:79536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.276 [2024-11-20 11:37:28.647513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.276 
[2024-11-20 11:37:28.647529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:79544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.276 [2024-11-20 11:37:28.647539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.276 [2024-11-20 11:37:28.647550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:79552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.276 [2024-11-20 11:37:28.647560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.276 [2024-11-20 11:37:28.647572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:79560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.276 [2024-11-20 11:37:28.647581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.276 [2024-11-20 11:37:28.647592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:79568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.276 [2024-11-20 11:37:28.647602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.276 [2024-11-20 11:37:28.647862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:79576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.276 [2024-11-20 11:37:28.647881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.276 [2024-11-20 11:37:28.647894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:79584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.276 [2024-11-20 11:37:28.647903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.276 [2024-11-20 11:37:28.647914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:79592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.277 [2024-11-20 11:37:28.647924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.277 [2024-11-20 11:37:28.647936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:79600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.277 [2024-11-20 11:37:28.647945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.277 [2024-11-20 11:37:28.647956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:79608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.277 [2024-11-20 11:37:28.647965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.277 [2024-11-20 11:37:28.648087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:79616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.277 [2024-11-20 11:37:28.648101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.277 [2024-11-20 11:37:28.648396] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:79624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.277 [2024-11-20 11:37:28.648408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.277 [2024-11-20 11:37:28.648420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:79632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.277 [2024-11-20 11:37:28.648429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.277 [2024-11-20 11:37:28.648442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:79640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.277 [2024-11-20 11:37:28.648451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.277 [2024-11-20 11:37:28.648463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:79648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.277 [2024-11-20 11:37:28.648472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.277 [2024-11-20 11:37:28.648719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:79656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.277 [2024-11-20 11:37:28.648742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.277 [2024-11-20 11:37:28.648755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:79664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.277 [2024-11-20 11:37:28.648764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.277 [2024-11-20 11:37:28.648776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:79672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.277 [2024-11-20 11:37:28.648785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.277 [2024-11-20 11:37:28.648797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:79680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.277 [2024-11-20 11:37:28.648806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.277 [2024-11-20 11:37:28.648817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:79688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.277 [2024-11-20 11:37:28.648827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.277 [2024-11-20 11:37:28.649073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:79696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.277 [2024-11-20 11:37:28.649086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.277 [2024-11-20 11:37:28.649098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:25 nsid:1 lba:79704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.277 [2024-11-20 11:37:28.649107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.277 [2024-11-20 11:37:28.649145] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:43.277 [2024-11-20 11:37:28.649158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79712 len:8 PRP1 0x0 PRP2 0x0 00:24:43.277 [2024-11-20 11:37:28.649519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.277 [2024-11-20 11:37:28.649539] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:43.277 [2024-11-20 11:37:28.649547] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:43.277 [2024-11-20 11:37:28.649555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79720 len:8 PRP1 0x0 PRP2 0x0 00:24:43.277 [2024-11-20 11:37:28.649565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.277 [2024-11-20 11:37:28.649575] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:43.277 [2024-11-20 11:37:28.649583] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:43.277 [2024-11-20 11:37:28.649591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79728 len:8 PRP1 0x0 PRP2 0x0 00:24:43.277 [2024-11-20 11:37:28.649600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.277 [2024-11-20 11:37:28.649610] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:43.277 [2024-11-20 11:37:28.649893] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:43.277 [2024-11-20 11:37:28.650041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79736 len:8 PRP1 0x0 PRP2 0x0 00:24:43.277 [2024-11-20 11:37:28.650147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.277 [2024-11-20 11:37:28.650160] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:43.277 [2024-11-20 11:37:28.650168] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:43.277 [2024-11-20 11:37:28.650177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79744 len:8 PRP1 0x0 PRP2 0x0 00:24:43.277 [2024-11-20 11:37:28.650187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.277 [2024-11-20 11:37:28.650196] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:43.277 [2024-11-20 11:37:28.650204] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:43.277 [2024-11-20 11:37:28.650212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79752 len:8 PRP1 0x0 PRP2 0x0 00:24:43.277 [2024-11-20 11:37:28.650221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.277 [2024-11-20 11:37:28.650230] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:43.277 [2024-11-20 11:37:28.650357] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:43.277 [2024-11-20 11:37:28.650457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79760 len:8 PRP1 0x0 PRP2 0x0 00:24:43.277 [2024-11-20 11:37:28.650469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.277 [2024-11-20 11:37:28.650481] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:43.277 [2024-11-20 11:37:28.650488] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:43.277 [2024-11-20 11:37:28.650497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79768 len:8 PRP1 0x0 PRP2 0x0 00:24:43.277 [2024-11-20 11:37:28.650507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.277 [2024-11-20 11:37:28.650516] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:43.278 [2024-11-20 11:37:28.650664] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:43.278 [2024-11-20 11:37:28.650785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79776 len:8 PRP1 0x0 PRP2 0x0 00:24:43.278 [2024-11-20 11:37:28.650798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.278 [2024-11-20 11:37:28.650810] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:43.278 [2024-11-20 11:37:28.650818] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:43.278 [2024-11-20 11:37:28.650826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79784 len:8 PRP1 0x0 PRP2 0x0 00:24:43.278 [2024-11-20 11:37:28.650835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.278 [2024-11-20 11:37:28.650845] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:43.278 [2024-11-20 11:37:28.651102] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:43.278 [2024-11-20 11:37:28.651114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79792 len:8 PRP1 0x0 PRP2 0x0 00:24:43.278 [2024-11-20 11:37:28.651124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.278 [2024-11-20 11:37:28.651135] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:43.278 [2024-11-20 11:37:28.651143] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:43.278 [2024-11-20 11:37:28.651151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79800 len:8 PRP1 0x0 PRP2 0x0 00:24:43.278 [2024-11-20 11:37:28.651159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:24:43.278 [2024-11-20 11:37:28.651412] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:43.278 [2024-11-20 11:37:28.651439] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:43.278 [2024-11-20 11:37:28.651449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79808 len:8 PRP1 0x0 PRP2 0x0 00:24:43.278 [2024-11-20 11:37:28.651459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.278 [2024-11-20 11:37:28.651469] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:43.278 [2024-11-20 11:37:28.651477] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:43.278 [2024-11-20 11:37:28.651485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79816 len:8 PRP1 0x0 PRP2 0x0 00:24:43.278 [2024-11-20 11:37:28.651494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.278 [2024-11-20 11:37:28.651503] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:43.278 [2024-11-20 11:37:28.651639] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:43.278 [2024-11-20 11:37:28.651871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79824 len:8 PRP1 0x0 PRP2 0x0 00:24:43.278 [2024-11-20 11:37:28.651891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.278 [2024-11-20 11:37:28.651903] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:43.278 [2024-11-20 11:37:28.651910] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:43.278 [2024-11-20 11:37:28.651918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79832 len:8 PRP1 0x0 PRP2 0x0 00:24:43.278 [2024-11-20 11:37:28.651928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.278 [2024-11-20 11:37:28.651937] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:43.278 [2024-11-20 11:37:28.651945] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:43.278 [2024-11-20 11:37:28.651953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79840 len:8 PRP1 0x0 PRP2 0x0 00:24:43.278 [2024-11-20 11:37:28.651962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.278 [2024-11-20 11:37:28.652229] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:43.278 [2024-11-20 11:37:28.652375] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:43.278 [2024-11-20 11:37:28.652521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79848 len:8 PRP1 0x0 PRP2 0x0 00:24:43.278 [2024-11-20 11:37:28.652643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.278 [2024-11-20 
11:37:28.652657] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:43.278 [2024-11-20 11:37:28.652665] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:43.278 [2024-11-20 11:37:28.652674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79856 len:8 PRP1 0x0 PRP2 0x0 00:24:43.278 [2024-11-20 11:37:28.652684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.278 [2024-11-20 11:37:28.652693] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:43.278 [2024-11-20 11:37:28.652701] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:43.278 [2024-11-20 11:37:28.652946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79864 len:8 PRP1 0x0 PRP2 0x0 00:24:43.278 [2024-11-20 11:37:28.652965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.278 [2024-11-20 11:37:28.652978] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:43.278 [2024-11-20 11:37:28.652986] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:43.278 [2024-11-20 11:37:28.652994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79872 len:8 PRP1 0x0 PRP2 0x0 00:24:43.278 [2024-11-20 11:37:28.653003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.278 [2024-11-20 11:37:28.653013] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:43.278 [2024-11-20 11:37:28.653020] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:43.278 [2024-11-20 11:37:28.653029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79880 len:8 PRP1 0x0 PRP2 0x0 00:24:43.278 [2024-11-20 11:37:28.653261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.278 [2024-11-20 11:37:28.653274] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:43.278 [2024-11-20 11:37:28.653282] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:43.278 [2024-11-20 11:37:28.653290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79888 len:8 PRP1 0x0 PRP2 0x0 00:24:43.278 [2024-11-20 11:37:28.653301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.278 [2024-11-20 11:37:28.653310] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:43.278 [2024-11-20 11:37:28.653318] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:43.278 [2024-11-20 11:37:28.653452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79896 len:8 PRP1 0x0 PRP2 0x0 00:24:43.278 [2024-11-20 11:37:28.653559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.278 [2024-11-20 11:37:28.653573] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:43.278 [2024-11-20 11:37:28.653581] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:43.278 [2024-11-20 11:37:28.653590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79904 len:8 PRP1 0x0 PRP2 0x0 00:24:43.279 [2024-11-20 11:37:28.653599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.279 [2024-11-20 11:37:28.653872] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:43.279 [2024-11-20 11:37:28.653891] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:43.279 [2024-11-20 11:37:28.653901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79912 len:8 PRP1 0x0 PRP2 0x0 00:24:43.279 [2024-11-20 11:37:28.653910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.279 [2024-11-20 11:37:28.653920] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:43.279 [2024-11-20 11:37:28.653927] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:43.279 [2024-11-20 11:37:28.653935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79920 len:8 PRP1 0x0 PRP2 0x0 00:24:43.279 [2024-11-20 11:37:28.653944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.279 [2024-11-20 11:37:28.653954] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:43.279 [2024-11-20 11:37:28.654185] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:43.279 [2024-11-20 11:37:28.654210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79928 len:8 PRP1 0x0 PRP2 0x0 00:24:43.279 [2024-11-20 11:37:28.654220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.279 [2024-11-20 11:37:28.654231] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:43.279 [2024-11-20 11:37:28.654239] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:43.279 [2024-11-20 11:37:28.654247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79936 len:8 PRP1 0x0 PRP2 0x0 00:24:43.279 [2024-11-20 11:37:28.654257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.279 [2024-11-20 11:37:28.654266] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:43.279 [2024-11-20 11:37:28.654273] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:43.279 [2024-11-20 11:37:28.654502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79944 len:8 PRP1 0x0 PRP2 0x0 00:24:43.279 [2024-11-20 11:37:28.654515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.279 [2024-11-20 11:37:28.654526] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting 
queued i/o 00:24:43.279 [2024-11-20 11:37:28.654533] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:43.279 [2024-11-20 11:37:28.654542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79952 len:8 PRP1 0x0 PRP2 0x0 00:24:43.279 [2024-11-20 11:37:28.654551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.279 [2024-11-20 11:37:28.654560] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:43.279 [2024-11-20 11:37:28.654860] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:43.279 [2024-11-20 11:37:28.654882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:79008 len:8 PRP1 0x0 PRP2 0x0 00:24:43.279 [2024-11-20 11:37:28.654898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.279 [2024-11-20 11:37:28.654915] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:43.279 [2024-11-20 11:37:28.654923] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:43.279 [2024-11-20 11:37:28.654931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:79016 len:8 PRP1 0x0 PRP2 0x0 00:24:43.279 [2024-11-20 11:37:28.654941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.279 [2024-11-20 11:37:28.655203] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:43.279 [2024-11-20 11:37:28.655213] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:43.279 [2024-11-20 11:37:28.655222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:79024 len:8 PRP1 0x0 PRP2 0x0 00:24:43.279 [2024-11-20 11:37:28.655231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.279 [2024-11-20 11:37:28.655241] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:43.279 [2024-11-20 11:37:28.655248] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:43.279 [2024-11-20 11:37:28.655256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:79032 len:8 PRP1 0x0 PRP2 0x0 00:24:43.279 [2024-11-20 11:37:28.656634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.279 [2024-11-20 11:37:28.656712] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:43.279 [2024-11-20 11:37:28.656734] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:43.279 [2024-11-20 11:37:28.657066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:79040 len:8 PRP1 0x0 PRP2 0x0 00:24:43.279 [2024-11-20 11:37:28.657111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.279 [2024-11-20 11:37:28.657138] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:43.279 [2024-11-20 11:37:28.657155] 
nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:43.279 [2024-11-20 11:37:28.657675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:79048 len:8 PRP1 0x0 PRP2 0x0 00:24:43.279 [2024-11-20 11:37:28.657693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.279 [2024-11-20 11:37:28.657709] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:43.279 [2024-11-20 11:37:28.657718] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:43.279 [2024-11-20 11:37:28.657729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:79056 len:8 PRP1 0x0 PRP2 0x0 00:24:43.279 [2024-11-20 11:37:28.658053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.279 [2024-11-20 11:37:28.658070] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:43.279 [2024-11-20 11:37:28.658081] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:43.279 [2024-11-20 11:37:28.658091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:79064 len:8 PRP1 0x0 PRP2 0x0 00:24:43.279 [2024-11-20 11:37:28.658104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.279 [2024-11-20 11:37:28.658115] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:43.279 [2024-11-20 11:37:28.658241] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:43.279 [2024-11-20 11:37:28.658256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:79072 len:8 PRP1 0x0 PRP2 0x0 00:24:43.279 [2024-11-20 11:37:28.658479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.279 [2024-11-20 11:37:28.658497] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:43.279 [2024-11-20 11:37:28.658507] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:43.280 [2024-11-20 11:37:28.658518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:79080 len:8 PRP1 0x0 PRP2 0x0 00:24:43.280 [2024-11-20 11:37:28.658531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.280 [2024-11-20 11:37:28.658948] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:43.280 [2024-11-20 11:37:28.658972] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:43.280 [2024-11-20 11:37:28.658990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:79088 len:8 PRP1 0x0 PRP2 0x0 00:24:43.280 [2024-11-20 11:37:28.659311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.280 [2024-11-20 11:37:28.659341] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:43.280 [2024-11-20 11:37:28.659368] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: 
*NOTICE*: Command completed manually: 00:24:43.280 [2024-11-20 11:37:28.659684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:79096 len:8 PRP1 0x0 PRP2 0x0 00:24:43.280 [2024-11-20 11:37:28.659708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.280 [2024-11-20 11:37:28.659731] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:43.280 [2024-11-20 11:37:28.660018] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:43.280 [2024-11-20 11:37:28.660042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:79104 len:8 PRP1 0x0 PRP2 0x0 00:24:43.280 [2024-11-20 11:37:28.660063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.280 [2024-11-20 11:37:28.660450] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:43.280 [2024-11-20 11:37:28.660490] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:43.280 [2024-11-20 11:37:28.660510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:79112 len:8 PRP1 0x0 PRP2 0x0 00:24:43.280 [2024-11-20 11:37:28.660526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.280 [2024-11-20 11:37:28.660540] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:43.280 [2024-11-20 11:37:28.660863] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:43.280 [2024-11-20 11:37:28.660893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:79120 len:8 PRP1 0x0 PRP2 0x0 00:24:43.280 [2024-11-20 11:37:28.660913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.280 [2024-11-20 11:37:28.660934] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:43.280 [2024-11-20 11:37:28.660946] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:43.280 [2024-11-20 11:37:28.660957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:79128 len:8 PRP1 0x0 PRP2 0x0 00:24:43.280 [2024-11-20 11:37:28.661359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.280 [2024-11-20 11:37:28.661379] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:43.280 [2024-11-20 11:37:28.661390] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:43.280 [2024-11-20 11:37:28.661400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:79136 len:8 PRP1 0x0 PRP2 0x0 00:24:43.280 [2024-11-20 11:37:28.661414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.280 [2024-11-20 11:37:28.661561] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:43.280 [2024-11-20 11:37:28.661955] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:43.280 [2024-11-20 
11:37:28.661979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:79144 len:8 PRP1 0x0 PRP2 0x0 00:24:43.280 [2024-11-20 11:37:28.661997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.280 [2024-11-20 11:37:28.662400] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:43.280 [2024-11-20 11:37:28.662437] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:43.280 [2024-11-20 11:37:28.662456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:79152 len:8 PRP1 0x0 PRP2 0x0 00:24:43.280 [2024-11-20 11:37:28.662475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.280 [2024-11-20 11:37:28.662814] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:43.280 [2024-11-20 11:37:28.662844] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:43.280 [2024-11-20 11:37:28.662858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:79160 len:8 PRP1 0x0 PRP2 0x0 00:24:43.280 [2024-11-20 11:37:28.662870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.280 [2024-11-20 11:37:28.662883] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:43.280 [2024-11-20 11:37:28.662892] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:43.280 [2024-11-20 11:37:28.662984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:79168 len:8 PRP1 0x0 PRP2 0x0 00:24:43.280 [2024-11-20 11:37:28.663010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.280 [2024-11-20 11:37:28.663024] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:43.280 [2024-11-20 11:37:28.663033] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:43.280 [2024-11-20 11:37:28.663043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:79176 len:8 PRP1 0x0 PRP2 0x0 00:24:43.280 [2024-11-20 11:37:28.663263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.280 [2024-11-20 11:37:28.663280] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:43.280 [2024-11-20 11:37:28.663290] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:43.280 [2024-11-20 11:37:28.663300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:79184 len:8 PRP1 0x0 PRP2 0x0 00:24:43.280 [2024-11-20 11:37:28.663311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.280 [2024-11-20 11:37:28.663766] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:43.280 [2024-11-20 11:37:28.663803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:24:43.280 [2024-11-20 11:37:28.663831] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:43.281 [2024-11-20 11:37:28.663842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.281 [2024-11-20 11:37:28.663854] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:43.281 [2024-11-20 11:37:28.663865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.281 [2024-11-20 11:37:28.663982] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:43.281 [2024-11-20 11:37:28.663997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.281 [2024-11-20 11:37:28.664231] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d1f50 is same with the state(6) to be set 00:24:43.281 [2024-11-20 11:37:28.664727] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:24:43.281 [2024-11-20 11:37:28.664775] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d1f50 (9): Bad file descriptor 00:24:43.281 [2024-11-20 11:37:28.665084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.281 [2024-11-20 11:37:28.665125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d1f50 with addr=10.0.0.3, port=4420 00:24:43.281 [2024-11-20 11:37:28.665141] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d1f50 is same with the state(6) to be set 00:24:43.281 [2024-11-20 11:37:28.665166] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d1f50 (9): Bad file descriptor 00:24:43.281 [2024-11-20 11:37:28.665271] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state 00:24:43.281 [2024-11-20 11:37:28.665286] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed 00:24:43.281 [2024-11-20 11:37:28.665299] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:24:43.281 [2024-11-20 11:37:28.665312] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed. 
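The connect() failures that follow report errno = 111, which on Linux is ECONNREFUSED: at this point nothing is accepting connections on 10.0.0.3:4420 (the listener is restored a few lines further down), so each reconnect attempt from the host is rejected, bdev_nvme marks the controller reset as failed, and it schedules the next try roughly once a second (timestamps 11:37:28.66 through 11:37:31.66). A throwaway check, not part of timeout.sh, to translate the errno number:

    # Translate the "errno = 111" seen in the posix_sock_create failures below;
    # assumes python3 is available on the test VM.
    python3 -c 'import errno, os; print(errno.errorcode[111], "-", os.strerror(111))'
    # prints: ECONNREFUSED - Connection refused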
00:24:43.281 [2024-11-20 11:37:28.665691] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:24:43.281 11:37:28 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@101 -- # sleep 3 00:24:44.214 4933.50 IOPS, 19.27 MiB/s [2024-11-20T11:37:29.803Z] [2024-11-20 11:37:29.666167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.214 [2024-11-20 11:37:29.666243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d1f50 with addr=10.0.0.3, port=4420 00:24:44.214 [2024-11-20 11:37:29.666261] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d1f50 is same with the state(6) to be set 00:24:44.214 [2024-11-20 11:37:29.666289] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d1f50 (9): Bad file descriptor 00:24:44.214 [2024-11-20 11:37:29.666311] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state 00:24:44.214 [2024-11-20 11:37:29.666322] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed 00:24:44.214 [2024-11-20 11:37:29.666334] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:24:44.214 [2024-11-20 11:37:29.666347] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed. 00:24:44.214 [2024-11-20 11:37:29.666358] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:24:45.150 3289.00 IOPS, 12.85 MiB/s [2024-11-20T11:37:30.739Z] [2024-11-20 11:37:30.666506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.150 [2024-11-20 11:37:30.666574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d1f50 with addr=10.0.0.3, port=4420 00:24:45.150 [2024-11-20 11:37:30.666592] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d1f50 is same with the state(6) to be set 00:24:45.150 [2024-11-20 11:37:30.666631] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d1f50 (9): Bad file descriptor 00:24:45.150 [2024-11-20 11:37:30.666654] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state 00:24:45.150 [2024-11-20 11:37:30.666664] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed 00:24:45.150 [2024-11-20 11:37:30.666676] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:24:45.150 [2024-11-20 11:37:30.666688] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed. 
00:24:45.150 [2024-11-20 11:37:30.666699] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:24:46.160 2466.75 IOPS, 9.64 MiB/s [2024-11-20T11:37:31.749Z] [2024-11-20 11:37:31.669455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.160 [2024-11-20 11:37:31.669529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d1f50 with addr=10.0.0.3, port=4420 00:24:46.160 [2024-11-20 11:37:31.669547] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d1f50 is same with the state(6) to be set 00:24:46.160 [2024-11-20 11:37:31.670010] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d1f50 (9): Bad file descriptor 00:24:46.160 [2024-11-20 11:37:31.670476] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state 00:24:46.160 [2024-11-20 11:37:31.670504] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed 00:24:46.160 [2024-11-20 11:37:31.670517] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:24:46.160 [2024-11-20 11:37:31.670530] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed. 00:24:46.160 [2024-11-20 11:37:31.670543] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:24:46.160 11:37:31 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:24:46.418 [2024-11-20 11:37:31.990536] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:24:46.676 11:37:32 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@103 -- # wait 97198 00:24:47.191 1973.40 IOPS, 7.71 MiB/s [2024-11-20T11:37:32.780Z] [2024-11-20 11:37:32.701136] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 4] Resetting controller successful. 
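Here host/timeout.sh re-adds the TCP listener on the target (the @102 rpc.py call traced above, acknowledged by "NVMe/TCP Target Listening on 10.0.0.3 port 4420"), and the next reconnect attempt succeeds: "Resetting controller successful". The interleaved IOPS/MiB/s figures are bdevperf's periodic progress output, which sags while the controller is unreachable and recovers once I/O resumes. Condensed into script form, with only the rpc.py invocation taken verbatim from the trace and the variable name a stand-in for the literal "wait 97198":

    # Sketch of host/timeout.sh@101-@103 as traced above; $rpc_pid is an assumed name for
    # whatever pid 97198 refers to (presumably the perform_tests helper of the first run).
    sleep 3
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener \
            nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    wait "$rpc_pid"    # returns once the first 10-second bdevperf run finishes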
00:24:49.060 2820.17 IOPS, 11.02 MiB/s [2024-11-20T11:37:35.583Z] 3535.71 IOPS, 13.81 MiB/s [2024-11-20T11:37:36.518Z] 4177.38 IOPS, 16.32 MiB/s [2024-11-20T11:37:37.892Z] 4678.89 IOPS, 18.28 MiB/s [2024-11-20T11:37:37.892Z] 5080.80 IOPS, 19.85 MiB/s 00:24:52.303 Latency(us) 00:24:52.303 [2024-11-20T11:37:37.892Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:52.303 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:24:52.303 Verification LBA range: start 0x0 length 0x4000 00:24:52.303 NVMe0n1 : 10.01 5087.96 19.87 3491.13 0.00 14886.98 685.15 3035150.89 00:24:52.303 [2024-11-20T11:37:37.892Z] =================================================================================================================== 00:24:52.303 [2024-11-20T11:37:37.892Z] Total : 5087.96 19.87 3491.13 0.00 14886.98 0.00 3035150.89 00:24:52.303 { 00:24:52.303 "results": [ 00:24:52.303 { 00:24:52.303 "job": "NVMe0n1", 00:24:52.303 "core_mask": "0x4", 00:24:52.303 "workload": "verify", 00:24:52.303 "status": "finished", 00:24:52.303 "verify_range": { 00:24:52.303 "start": 0, 00:24:52.303 "length": 16384 00:24:52.303 }, 00:24:52.303 "queue_depth": 128, 00:24:52.303 "io_size": 4096, 00:24:52.303 "runtime": 10.011093, 00:24:52.303 "iops": 5087.955930486311, 00:24:52.303 "mibps": 19.874827853462154, 00:24:52.303 "io_failed": 34950, 00:24:52.303 "io_timeout": 0, 00:24:52.303 "avg_latency_us": 14886.982205672213, 00:24:52.303 "min_latency_us": 685.1490909090909, 00:24:52.303 "max_latency_us": 3035150.8945454545 00:24:52.303 } 00:24:52.303 ], 00:24:52.303 "core_count": 1 00:24:52.303 } 00:24:52.303 11:37:37 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@105 -- # killprocess 97052 00:24:52.303 11:37:37 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # '[' -z 97052 ']' 00:24:52.303 11:37:37 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # kill -0 97052 00:24:52.303 11:37:37 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # uname 00:24:52.303 11:37:37 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:52.303 11:37:37 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 97052 00:24:52.303 killing process with pid 97052 00:24:52.303 Received shutdown signal, test time was about 10.000000 seconds 00:24:52.303 00:24:52.303 Latency(us) 00:24:52.303 [2024-11-20T11:37:37.892Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:52.303 [2024-11-20T11:37:37.892Z] =================================================================================================================== 00:24:52.303 [2024-11-20T11:37:37.892Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:52.303 11:37:37 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:24:52.303 11:37:37 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:24:52.303 11:37:37 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 97052' 00:24:52.303 11:37:37 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@973 -- # kill 97052 00:24:52.303 11:37:37 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@978 -- # wait 97052 00:24:52.303 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
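Reading the result block above: the MiB/s column is simply iops × io_size, and the run includes the stretch where requests were failing over (io_failed = 34950 over the 10 s runtime). A one-line check reproduces the reported figure:

    # Derive the "mibps" value from the "iops" and "io_size" fields in the JSON above.
    awk 'BEGIN { printf "%.2f MiB/s\n", 5087.955930486311 * 4096 / 1048576 }'
    # prints 19.87 MiB/s

The killprocess trace that follows comes from common/autotest_common.sh; pieced together from the traced commands it amounts to roughly the sketch below. This is an approximation built only from what is visible in the log, not the exact SPDK helper:

    killprocess() {
            local pid=$1                                            # was 97052 in the trace above
            if [ -z "$pid" ]; then return 1; fi                     # @954
            kill -0 "$pid" || return 1                              # @958: bail out if the pid is already gone
            if [ "$(uname)" = Linux ]; then                         # @959
                    process_name=$(ps --no-headers -o comm= "$pid") # @960: "reactor_2" here
            fi
            if [ "$process_name" = sudo ]; then                     # @964: sudo-wrapped processes get special handling
                    : # not exercised in this run
            fi
            echo "killing process with pid $pid"                    # @972
            kill "$pid"                                             # @973
            wait "$pid"                                             # @978
    }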
00:24:52.303 11:37:37 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@110 -- # bdevperf_pid=97325 00:24:52.303 11:37:37 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f 00:24:52.303 11:37:37 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@112 -- # waitforlisten 97325 /var/tmp/bdevperf.sock 00:24:52.303 11:37:37 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # '[' -z 97325 ']' 00:24:52.303 11:37:37 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:52.303 11:37:37 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:52.303 11:37:37 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:52.303 11:37:37 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:52.303 11:37:37 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:52.303 [2024-11-20 11:37:37.805355] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 00:24:52.303 [2024-11-20 11:37:37.805448] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97325 ] 00:24:52.562 [2024-11-20 11:37:37.950724] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:52.562 [2024-11-20 11:37:37.992257] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:53.505 11:37:38 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:53.505 11:37:38 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@868 -- # return 0 00:24:53.505 11:37:38 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@116 -- # dtrace_pid=97353 00:24:53.505 11:37:38 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 97325 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_timeout.bt 00:24:53.505 11:37:38 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9 00:24:53.763 11:37:39 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:24:54.022 NVMe0n1 00:24:54.022 11:37:39 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@123 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:54.022 11:37:39 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@124 -- # rpc_pid=97405 00:24:54.022 11:37:39 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@125 -- # sleep 1 00:24:54.281 Running I/O for 10 seconds... 
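The controller attach that drives this run is plain JSON-RPC against the bdevperf socket. A hedged sketch of the same sequence driven from Python instead of host/timeout.sh; the rpc.py/bdevperf.py paths, socket path, and option values are copied verbatim from the logged commands, while the subprocess wrapper itself is only an assumed convenience, not an SPDK API:

```python
import subprocess

SPDK = "/home/vagrant/spdk_repo/spdk"   # repo path as used in the log
SOCK = "/var/tmp/bdevperf.sock"         # bdevperf RPC socket from the log

def rpc(*args: str) -> None:
    # Issue the same rpc.py invocations host/timeout.sh runs above.
    subprocess.run([f"{SPDK}/scripts/rpc.py", "-s", SOCK, *args], check=True)

# Option values below are copied verbatim from the logged rpc.py calls:
# a 5 s controller-loss timeout with a 2 s delay between reconnect attempts.
rpc("bdev_nvme_set_options", "-r", "-1", "-e", "9")
rpc("bdev_nvme_attach_controller",
    "-b", "NVMe0", "-t", "tcp", "-a", "10.0.0.3", "-s", "4420",
    "-f", "ipv4", "-n", "nqn.2016-06.io.spdk:cnode1",
    "--ctrlr-loss-timeout-sec", "5", "--reconnect-delay-sec", "2")

# Kick off the I/O phase through bdevperf's helper, as in the log.
subprocess.run([f"{SPDK}/examples/bdev/bdevperf/bdevperf.py",
                "-s", SOCK, "perform_tests"], check=True)
```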
00:24:55.215 11:37:40 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:24:55.477 16913.00 IOPS, 66.07 MiB/s [2024-11-20T11:37:41.066Z] [2024-11-20 11:37:40.870349] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c42be0 is same with the state(6) to be set 00:24:55.477 [2024-11-20 11:37:40.870402] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c42be0 is same with the state(6) to be set 00:24:55.478 [2024-11-20 11:37:40.870414] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c42be0 is same with the state(6) to be set 00:24:55.478 [2024-11-20 11:37:40.870423] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c42be0 is same with the state(6) to be set 00:24:55.478 [2024-11-20 11:37:40.870432] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c42be0 is same with the state(6) to be set 00:24:55.478 [2024-11-20 11:37:40.870441] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c42be0 is same with the state(6) to be set 00:24:55.478 [2024-11-20 11:37:40.870449] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c42be0 is same with the state(6) to be set 00:24:55.478 [2024-11-20 11:37:40.870458] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c42be0 is same with the state(6) to be set 00:24:55.478 [2024-11-20 11:37:40.870466] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c42be0 is same with the state(6) to be set 00:24:55.478 [2024-11-20 11:37:40.870475] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c42be0 is same with the state(6) to be set 00:24:55.478 [2024-11-20 11:37:40.870483] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c42be0 is same with the state(6) to be set 00:24:55.478 [2024-11-20 11:37:40.870492] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c42be0 is same with the state(6) to be set 00:24:55.478 [2024-11-20 11:37:40.870501] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c42be0 is same with the state(6) to be set 00:24:55.478 [2024-11-20 11:37:40.870510] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c42be0 is same with the state(6) to be set 00:24:55.478 [2024-11-20 11:37:40.870518] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c42be0 is same with the state(6) to be set 00:24:55.478 [2024-11-20 11:37:40.870526] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c42be0 is same with the state(6) to be set 00:24:55.478 [2024-11-20 11:37:40.870535] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c42be0 is same with the state(6) to be set 00:24:55.478 [2024-11-20 11:37:40.870543] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c42be0 is same with the state(6) to be set 00:24:55.478 [2024-11-20 11:37:40.870551] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c42be0 is same with the state(6) to be set 00:24:55.478 [2024-11-20 11:37:40.870559] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c42be0 is same with the state(6) to be set 
00:24:55.478 [2024-11-20 11:37:40.870568] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c42be0 is same with the state(6) to be set 00:24:55.478 [2024-11-20 11:37:40.870576] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c42be0 is same with the state(6) to be set 00:24:55.478 [2024-11-20 11:37:40.871133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:72104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.478 [2024-11-20 11:37:40.871167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.478 [2024-11-20 11:37:40.871190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:111224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.478 [2024-11-20 11:37:40.871201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.478 [2024-11-20 11:37:40.871213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:88680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.478 [2024-11-20 11:37:40.871224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.478 [2024-11-20 11:37:40.871236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:105456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.478 [2024-11-20 11:37:40.871246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.478 [2024-11-20 11:37:40.871257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:29032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.478 [2024-11-20 11:37:40.871267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.478 [2024-11-20 11:37:40.871278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:18912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.478 [2024-11-20 11:37:40.871287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.478 [2024-11-20 11:37:40.871299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:83800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.478 [2024-11-20 11:37:40.871308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.478 [2024-11-20 11:37:40.871319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:27480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.478 [2024-11-20 11:37:40.871329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.478 [2024-11-20 11:37:40.871340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:98000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.478 [2024-11-20 11:37:40.871349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.478 [2024-11-20 11:37:40.871361] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:88704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.478 [2024-11-20 11:37:40.871370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.478 [2024-11-20 11:37:40.871382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:109744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.478 [2024-11-20 11:37:40.871391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.478 [2024-11-20 11:37:40.871404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:69928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.478 [2024-11-20 11:37:40.871414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.478 [2024-11-20 11:37:40.871425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:100528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.478 [2024-11-20 11:37:40.871435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.478 [2024-11-20 11:37:40.871448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:79112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.478 [2024-11-20 11:37:40.871457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.478 [2024-11-20 11:37:40.871469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:71144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.478 [2024-11-20 11:37:40.871478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.478 [2024-11-20 11:37:40.871490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:97448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.478 [2024-11-20 11:37:40.871500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.478 [2024-11-20 11:37:40.871511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:115528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.478 [2024-11-20 11:37:40.871521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.478 [2024-11-20 11:37:40.871532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:68496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.478 [2024-11-20 11:37:40.871541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.478 [2024-11-20 11:37:40.871553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:107528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.478 [2024-11-20 11:37:40.871563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.478 [2024-11-20 11:37:40.871574] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:16 nsid:1 lba:83672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.478 [2024-11-20 11:37:40.871584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.478 [2024-11-20 11:37:40.871595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:106584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.478 [2024-11-20 11:37:40.871604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.478 [2024-11-20 11:37:40.871628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:98752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.479 [2024-11-20 11:37:40.871640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.479 [2024-11-20 11:37:40.871652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:114072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.479 [2024-11-20 11:37:40.871661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.479 [2024-11-20 11:37:40.871672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:33304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.479 [2024-11-20 11:37:40.871682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.479 [2024-11-20 11:37:40.871694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:32544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.479 [2024-11-20 11:37:40.871703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.479 [2024-11-20 11:37:40.871714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:89280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.479 [2024-11-20 11:37:40.871724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.479 [2024-11-20 11:37:40.871735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:106128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.479 [2024-11-20 11:37:40.871745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.479 [2024-11-20 11:37:40.871757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:43576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.479 [2024-11-20 11:37:40.871768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.479 [2024-11-20 11:37:40.871779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:105584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.479 [2024-11-20 11:37:40.871788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.479 [2024-11-20 11:37:40.871800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 
lba:103208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.479 [2024-11-20 11:37:40.871810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.479 [2024-11-20 11:37:40.871821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:116872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.479 [2024-11-20 11:37:40.871831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.479 [2024-11-20 11:37:40.871842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:128176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.479 [2024-11-20 11:37:40.871852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.479 [2024-11-20 11:37:40.871863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:17032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.479 [2024-11-20 11:37:40.871872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.479 [2024-11-20 11:37:40.871884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:40224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.479 [2024-11-20 11:37:40.871893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.479 [2024-11-20 11:37:40.871905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:21104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.479 [2024-11-20 11:37:40.871915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.479 [2024-11-20 11:37:40.871927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:114776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.479 [2024-11-20 11:37:40.871936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.479 [2024-11-20 11:37:40.871947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:121128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.479 [2024-11-20 11:37:40.871957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.479 [2024-11-20 11:37:40.871968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:107600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.479 [2024-11-20 11:37:40.871977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.479 [2024-11-20 11:37:40.871993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:104664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.479 [2024-11-20 11:37:40.872003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.479 [2024-11-20 11:37:40.872014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:9632 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:24:55.479 [2024-11-20 11:37:40.872024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.479 [2024-11-20 11:37:40.872037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:103832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.479 [2024-11-20 11:37:40.872046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.479 [2024-11-20 11:37:40.872057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:87016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.479 [2024-11-20 11:37:40.872067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.479 [2024-11-20 11:37:40.872079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.479 [2024-11-20 11:37:40.872088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.479 [2024-11-20 11:37:40.872100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:58976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.479 [2024-11-20 11:37:40.872110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.479 [2024-11-20 11:37:40.872121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:51864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.479 [2024-11-20 11:37:40.872130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.479 [2024-11-20 11:37:40.872142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:51312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.479 [2024-11-20 11:37:40.872151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.479 [2024-11-20 11:37:40.872163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:2224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.479 [2024-11-20 11:37:40.872172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.479 [2024-11-20 11:37:40.872184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:130568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.479 [2024-11-20 11:37:40.872193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.479 [2024-11-20 11:37:40.872205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:47216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.479 [2024-11-20 11:37:40.872214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.479 [2024-11-20 11:37:40.872225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:24616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.479 
[2024-11-20 11:37:40.872235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.480 [2024-11-20 11:37:40.872246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:95912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.480 [2024-11-20 11:37:40.872255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.480 [2024-11-20 11:37:40.872266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:61808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.480 [2024-11-20 11:37:40.872276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.480 [2024-11-20 11:37:40.872287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:64736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.480 [2024-11-20 11:37:40.872297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.480 [2024-11-20 11:37:40.872309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:76448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.480 [2024-11-20 11:37:40.872318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.480 [2024-11-20 11:37:40.872330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:68128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.480 [2024-11-20 11:37:40.872340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.480 [2024-11-20 11:37:40.872351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:114440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.480 [2024-11-20 11:37:40.872361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.480 [2024-11-20 11:37:40.872373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:33584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.480 [2024-11-20 11:37:40.872382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.480 [2024-11-20 11:37:40.872394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:121096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.480 [2024-11-20 11:37:40.872404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.480 [2024-11-20 11:37:40.872416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:76592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.480 [2024-11-20 11:37:40.872426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.480 [2024-11-20 11:37:40.872437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:3752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.480 [2024-11-20 11:37:40.872446] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.480 [2024-11-20 11:37:40.872458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:54312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.480 [2024-11-20 11:37:40.872467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.480 [2024-11-20 11:37:40.872479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:77312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.480 [2024-11-20 11:37:40.872488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.480 [2024-11-20 11:37:40.872500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:108376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.480 [2024-11-20 11:37:40.872509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.480 [2024-11-20 11:37:40.872521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:123320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.480 [2024-11-20 11:37:40.872530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.480 [2024-11-20 11:37:40.872542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:83456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.480 [2024-11-20 11:37:40.872551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.480 [2024-11-20 11:37:40.872565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:19880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.480 [2024-11-20 11:37:40.872575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.480 [2024-11-20 11:37:40.872586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:74136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.480 [2024-11-20 11:37:40.872596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.480 [2024-11-20 11:37:40.872607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:58144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.480 [2024-11-20 11:37:40.872627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.480 [2024-11-20 11:37:40.872639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:76928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.480 [2024-11-20 11:37:40.872649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.480 [2024-11-20 11:37:40.872660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:126376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.480 [2024-11-20 11:37:40.872672] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.480 [2024-11-20 11:37:40.872683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.480 [2024-11-20 11:37:40.872693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.480 [2024-11-20 11:37:40.872704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:71488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.480 [2024-11-20 11:37:40.872714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.480 [2024-11-20 11:37:40.872725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:92400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.480 [2024-11-20 11:37:40.872735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.480 [2024-11-20 11:37:40.872746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:25888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.480 [2024-11-20 11:37:40.872756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.480 [2024-11-20 11:37:40.872767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:18416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.480 [2024-11-20 11:37:40.872776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.480 [2024-11-20 11:37:40.872788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:117760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.480 [2024-11-20 11:37:40.872797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.480 [2024-11-20 11:37:40.872808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:6680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.480 [2024-11-20 11:37:40.872817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.480 [2024-11-20 11:37:40.872828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:72896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.480 [2024-11-20 11:37:40.872837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.481 [2024-11-20 11:37:40.872848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:54672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.481 [2024-11-20 11:37:40.872857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.481 [2024-11-20 11:37:40.873534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:100544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.481 [2024-11-20 11:37:40.873561] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.481 [2024-11-20 11:37:40.873575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:117808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.481 [2024-11-20 11:37:40.873585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.481 [2024-11-20 11:37:40.873596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:32408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.481 [2024-11-20 11:37:40.873606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.481 [2024-11-20 11:37:40.873634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:83024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.481 [2024-11-20 11:37:40.873645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.481 [2024-11-20 11:37:40.873657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:97896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.481 [2024-11-20 11:37:40.873666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.481 [2024-11-20 11:37:40.873677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:10504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.481 [2024-11-20 11:37:40.873687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.481 [2024-11-20 11:37:40.873698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:116312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.481 [2024-11-20 11:37:40.873708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.481 [2024-11-20 11:37:40.873719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.481 [2024-11-20 11:37:40.873729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.481 [2024-11-20 11:37:40.873740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:120656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.481 [2024-11-20 11:37:40.873750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.481 [2024-11-20 11:37:40.873762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:34776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.481 [2024-11-20 11:37:40.873771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.481 [2024-11-20 11:37:40.873782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:78120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.481 [2024-11-20 11:37:40.873792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.481 [2024-11-20 11:37:40.873812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.481 [2024-11-20 11:37:40.873824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.481 [2024-11-20 11:37:40.873835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:26480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.481 [2024-11-20 11:37:40.873845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.481 [2024-11-20 11:37:40.873862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:130416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.481 [2024-11-20 11:37:40.873872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.481 [2024-11-20 11:37:40.873884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:3504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.481 [2024-11-20 11:37:40.873893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.481 [2024-11-20 11:37:40.873905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:91200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.481 [2024-11-20 11:37:40.873914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.481 [2024-11-20 11:37:40.873925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:48160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.481 [2024-11-20 11:37:40.873935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.481 [2024-11-20 11:37:40.873946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:11272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.481 [2024-11-20 11:37:40.873956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.481 [2024-11-20 11:37:40.873967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:68616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.481 [2024-11-20 11:37:40.873976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.481 [2024-11-20 11:37:40.873988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:126936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.481 [2024-11-20 11:37:40.873997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.481 [2024-11-20 11:37:40.874008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:52288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.481 [2024-11-20 11:37:40.874018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:24:55.481 [2024-11-20 11:37:40.874029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:13168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.481 [2024-11-20 11:37:40.874039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.481 [2024-11-20 11:37:40.874050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:52688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.481 [2024-11-20 11:37:40.874060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.481 [2024-11-20 11:37:40.874071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:19488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.481 [2024-11-20 11:37:40.874081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.481 [2024-11-20 11:37:40.874092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:34880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.481 [2024-11-20 11:37:40.874104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.482 [2024-11-20 11:37:40.874115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:90344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.482 [2024-11-20 11:37:40.874125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.482 [2024-11-20 11:37:40.874136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:22072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.482 [2024-11-20 11:37:40.874146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.482 [2024-11-20 11:37:40.874157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:33576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.482 [2024-11-20 11:37:40.874167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.482 [2024-11-20 11:37:40.874178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:109048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.482 [2024-11-20 11:37:40.874187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.482 [2024-11-20 11:37:40.874201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:73520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.482 [2024-11-20 11:37:40.874210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.482 [2024-11-20 11:37:40.874222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:14808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.482 [2024-11-20 11:37:40.874231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.482 [2024-11-20 
11:37:40.874243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:91256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.482 [2024-11-20 11:37:40.874252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.482 [2024-11-20 11:37:40.874263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:115920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.482 [2024-11-20 11:37:40.874273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.482 [2024-11-20 11:37:40.874284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:34440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.482 [2024-11-20 11:37:40.874294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.482 [2024-11-20 11:37:40.874305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:110304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.482 [2024-11-20 11:37:40.874314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.482 [2024-11-20 11:37:40.874326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:54552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.482 [2024-11-20 11:37:40.874335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.482 [2024-11-20 11:37:40.874347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:71088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.482 [2024-11-20 11:37:40.874356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.482 [2024-11-20 11:37:40.874368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:76992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.482 [2024-11-20 11:37:40.874377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.482 [2024-11-20 11:37:40.874389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:81136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.482 [2024-11-20 11:37:40.874399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.482 [2024-11-20 11:37:40.874410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:98176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.482 [2024-11-20 11:37:40.874419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.482 [2024-11-20 11:37:40.874431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:62200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.482 [2024-11-20 11:37:40.874442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.482 [2024-11-20 11:37:40.874453] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.482 [2024-11-20 11:37:40.874463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.482 [2024-11-20 11:37:40.874497] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:55.482 [2024-11-20 11:37:40.874510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:93480 len:8 PRP1 0x0 PRP2 0x0 00:24:55.482 [2024-11-20 11:37:40.874519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.482 [2024-11-20 11:37:40.874532] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:55.482 [2024-11-20 11:37:40.874540] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:55.482 [2024-11-20 11:37:40.874548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:91056 len:8 PRP1 0x0 PRP2 0x0 00:24:55.482 [2024-11-20 11:37:40.874560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.482 [2024-11-20 11:37:40.874569] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:55.482 [2024-11-20 11:37:40.874577] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:55.482 [2024-11-20 11:37:40.874584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:105616 len:8 PRP1 0x0 PRP2 0x0 00:24:55.482 [2024-11-20 11:37:40.874594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.482 [2024-11-20 11:37:40.874603] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:55.482 [2024-11-20 11:37:40.874610] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:55.482 [2024-11-20 11:37:40.874632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:73816 len:8 PRP1 0x0 PRP2 0x0 00:24:55.482 [2024-11-20 11:37:40.874642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.482 [2024-11-20 11:37:40.874652] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:55.482 [2024-11-20 11:37:40.874659] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:55.482 [2024-11-20 11:37:40.874667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:113208 len:8 PRP1 0x0 PRP2 0x0 00:24:55.482 [2024-11-20 11:37:40.874676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.482 [2024-11-20 11:37:40.874686] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:55.482 [2024-11-20 11:37:40.874694] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:55.482 [2024-11-20 11:37:40.874702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:51008 len:8 PRP1 0x0 PRP2 0x0 00:24:55.482 [2024-11-20 11:37:40.874711] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.482 [2024-11-20 11:37:40.874721] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:55.482 [2024-11-20 11:37:40.874728] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:55.482 [2024-11-20 11:37:40.874736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:27608 len:8 PRP1 0x0 PRP2 0x0 00:24:55.482 [2024-11-20 11:37:40.874745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.482 [2024-11-20 11:37:40.875071] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:24:55.483 [2024-11-20 11:37:40.875150] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148df50 (9): Bad file descriptor 00:24:55.483 [2024-11-20 11:37:40.875264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.483 [2024-11-20 11:37:40.875287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148df50 with addr=10.0.0.3, port=4420 00:24:55.483 [2024-11-20 11:37:40.875299] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148df50 is same with the state(6) to be set 00:24:55.483 [2024-11-20 11:37:40.875317] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148df50 (9): Bad file descriptor 00:24:55.483 [2024-11-20 11:37:40.875334] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state 00:24:55.483 [2024-11-20 11:37:40.875344] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed 00:24:55.483 [2024-11-20 11:37:40.875355] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:24:55.483 [2024-11-20 11:37:40.875366] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed. 
00:24:55.483 [2024-11-20 11:37:40.875377] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:24:55.483 11:37:40 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@128 -- # wait 97405 00:24:57.355 10194.50 IOPS, 39.82 MiB/s [2024-11-20T11:37:42.944Z] 6796.33 IOPS, 26.55 MiB/s [2024-11-20T11:37:42.944Z] [2024-11-20 11:37:42.875681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.355 [2024-11-20 11:37:42.875765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148df50 with addr=10.0.0.3, port=4420 00:24:57.355 [2024-11-20 11:37:42.875783] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148df50 is same with the state(6) to be set 00:24:57.355 [2024-11-20 11:37:42.875813] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148df50 (9): Bad file descriptor 00:24:57.355 [2024-11-20 11:37:42.875851] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state 00:24:57.355 [2024-11-20 11:37:42.875864] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed 00:24:57.355 [2024-11-20 11:37:42.875876] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:24:57.355 [2024-11-20 11:37:42.875888] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed. 00:24:57.356 [2024-11-20 11:37:42.875900] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:24:59.285 5097.25 IOPS, 19.91 MiB/s [2024-11-20T11:37:45.133Z] 4077.80 IOPS, 15.93 MiB/s [2024-11-20T11:37:45.133Z] [2024-11-20 11:37:44.876185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.544 [2024-11-20 11:37:44.876279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148df50 with addr=10.0.0.3, port=4420 00:24:59.544 [2024-11-20 11:37:44.876307] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148df50 is same with the state(6) to be set 00:24:59.544 [2024-11-20 11:37:44.876347] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148df50 (9): Bad file descriptor 00:24:59.544 [2024-11-20 11:37:44.876379] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state 00:24:59.544 [2024-11-20 11:37:44.876395] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed 00:24:59.544 [2024-11-20 11:37:44.876406] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:24:59.544 [2024-11-20 11:37:44.876872] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed. 00:24:59.544 [2024-11-20 11:37:44.876929] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:25:01.484 3398.17 IOPS, 13.27 MiB/s [2024-11-20T11:37:47.073Z] 2912.71 IOPS, 11.38 MiB/s [2024-11-20T11:37:47.073Z] [2024-11-20 11:37:46.877034] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 
00:25:01.484 [2024-11-20 11:37:46.877104] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state 00:25:01.484 [2024-11-20 11:37:46.877126] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed 00:25:01.484 [2024-11-20 11:37:46.877142] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] already in failed state 00:25:01.484 [2024-11-20 11:37:46.877156] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed. 00:25:02.419 2548.62 IOPS, 9.96 MiB/s 00:25:02.419 Latency(us) 00:25:02.419 [2024-11-20T11:37:48.008Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:02.419 Job: NVMe0n1 (Core Mask 0x4, workload: randread, depth: 128, IO size: 4096) 00:25:02.419 NVMe0n1 : 8.21 2483.80 9.70 15.59 0.00 51161.85 2546.97 7015926.69 00:25:02.419 [2024-11-20T11:37:48.008Z] =================================================================================================================== 00:25:02.419 [2024-11-20T11:37:48.008Z] Total : 2483.80 9.70 15.59 0.00 51161.85 2546.97 7015926.69 00:25:02.419 { 00:25:02.419 "results": [ 00:25:02.419 { 00:25:02.419 "job": "NVMe0n1", 00:25:02.419 "core_mask": "0x4", 00:25:02.419 "workload": "randread", 00:25:02.419 "status": "finished", 00:25:02.419 "queue_depth": 128, 00:25:02.419 "io_size": 4096, 00:25:02.419 "runtime": 8.208792, 00:25:02.419 "iops": 2483.8002960727963, 00:25:02.419 "mibps": 9.70234490653436, 00:25:02.419 "io_failed": 128, 00:25:02.419 "io_timeout": 0, 00:25:02.419 "avg_latency_us": 51161.854369990295, 00:25:02.419 "min_latency_us": 2546.9672727272728, 00:25:02.419 "max_latency_us": 7015926.69090909 00:25:02.419 } 00:25:02.419 ], 00:25:02.419 "core_count": 1 00:25:02.419 } 00:25:02.419 11:37:47 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@129 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:25:02.419 Attaching 5 probes... 
00:25:02.419 1526.794986: reset bdev controller NVMe0 00:25:02.419 1526.928747: reconnect bdev controller NVMe0 00:25:02.419 3527.236356: reconnect delay bdev controller NVMe0 00:25:02.419 3527.266922: reconnect bdev controller NVMe0 00:25:02.419 5527.748145: reconnect delay bdev controller NVMe0 00:25:02.419 5527.774644: reconnect bdev controller NVMe0 00:25:02.419 7528.725720: reconnect delay bdev controller NVMe0 00:25:02.419 7528.752988: reconnect bdev controller NVMe0 00:25:02.419 11:37:47 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@132 -- # grep -c 'reconnect delay bdev controller NVMe0' 00:25:02.419 11:37:47 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@132 -- # (( 3 <= 2 )) 00:25:02.419 11:37:47 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@136 -- # kill 97353 00:25:02.419 11:37:47 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@137 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:25:02.419 11:37:47 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@139 -- # killprocess 97325 00:25:02.419 11:37:47 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # '[' -z 97325 ']' 00:25:02.419 11:37:47 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # kill -0 97325 00:25:02.419 11:37:47 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # uname 00:25:02.419 11:37:47 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:02.419 11:37:47 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 97325 00:25:02.419 killing process with pid 97325 00:25:02.419 Received shutdown signal, test time was about 8.273853 seconds 00:25:02.419 00:25:02.419 Latency(us) 00:25:02.419 [2024-11-20T11:37:48.008Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:02.419 [2024-11-20T11:37:48.008Z] =================================================================================================================== 00:25:02.419 [2024-11-20T11:37:48.008Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:02.419 11:37:47 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:25:02.419 11:37:47 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:25:02.419 11:37:47 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 97325' 00:25:02.419 11:37:47 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@973 -- # kill 97325 00:25:02.419 11:37:47 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@978 -- # wait 97325 00:25:02.677 11:37:48 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@141 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:02.935 11:37:48 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@143 -- # trap - SIGINT SIGTERM EXIT 00:25:02.935 11:37:48 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@145 -- # nvmftestfini 00:25:02.935 11:37:48 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:02.935 11:37:48 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@121 -- # sync 00:25:02.935 11:37:48 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:02.935 11:37:48 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@124 -- # set +e 00:25:02.935 11:37:48 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:02.935 11:37:48 
nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:02.935 rmmod nvme_tcp 00:25:02.935 rmmod nvme_fabrics 00:25:02.935 rmmod nvme_keyring 00:25:03.195 11:37:48 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:03.195 11:37:48 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@128 -- # set -e 00:25:03.195 11:37:48 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@129 -- # return 0 00:25:03.195 11:37:48 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@517 -- # '[' -n 96781 ']' 00:25:03.195 11:37:48 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@518 -- # killprocess 96781 00:25:03.195 11:37:48 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # '[' -z 96781 ']' 00:25:03.195 11:37:48 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # kill -0 96781 00:25:03.195 11:37:48 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # uname 00:25:03.195 11:37:48 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:03.195 11:37:48 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 96781 00:25:03.195 killing process with pid 96781 00:25:03.195 11:37:48 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:03.195 11:37:48 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:03.195 11:37:48 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 96781' 00:25:03.195 11:37:48 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@973 -- # kill 96781 00:25:03.195 11:37:48 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@978 -- # wait 96781 00:25:03.195 11:37:48 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:03.195 11:37:48 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:03.195 11:37:48 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:03.195 11:37:48 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@297 -- # iptr 00:25:03.195 11:37:48 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@791 -- # iptables-save 00:25:03.195 11:37:48 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:03.195 11:37:48 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@791 -- # iptables-restore 00:25:03.195 11:37:48 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:03.195 11:37:48 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:25:03.195 11:37:48 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:25:03.195 11:37:48 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:25:03.195 11:37:48 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:25:03.195 11:37:48 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:25:03.454 11:37:48 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:25:03.454 11:37:48 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:25:03.454 11:37:48 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:25:03.454 11:37:48 
nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:25:03.454 11:37:48 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:25:03.454 11:37:48 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:25:03.454 11:37:48 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:25:03.454 11:37:48 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:25:03.454 11:37:48 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:25:03.454 11:37:48 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@246 -- # remove_spdk_ns 00:25:03.454 11:37:48 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:03.454 11:37:48 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:03.454 11:37:48 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:03.454 11:37:48 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@300 -- # return 0 00:25:03.454 00:25:03.454 real 0m46.528s 00:25:03.454 user 2m17.763s 00:25:03.454 sys 0m4.699s 00:25:03.454 11:37:48 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:03.454 11:37:48 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:03.454 ************************************ 00:25:03.454 END TEST nvmf_timeout 00:25:03.454 ************************************ 00:25:03.454 11:37:48 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ virt == phy ]] 00:25:03.454 11:37:48 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:25:03.454 00:25:03.454 real 5m42.914s 00:25:03.454 user 14m50.176s 00:25:03.454 sys 1m1.813s 00:25:03.454 11:37:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:03.454 11:37:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:03.454 ************************************ 00:25:03.454 END TEST nvmf_host 00:25:03.454 ************************************ 00:25:03.454 11:37:49 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]] 00:25:03.454 11:37:49 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 0 -eq 0 ]] 00:25:03.454 11:37:49 nvmf_tcp -- nvmf/nvmf.sh@20 -- # run_test nvmf_target_core_interrupt_mode /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:25:03.454 11:37:49 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:25:03.454 11:37:49 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:03.454 11:37:49 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:03.454 ************************************ 00:25:03.454 START TEST nvmf_target_core_interrupt_mode 00:25:03.454 ************************************ 00:25:03.454 11:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:25:03.713 * Looking for test storage... 
00:25:03.713 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:25:03.713 11:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:25:03.713 11:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1693 -- # lcov --version 00:25:03.713 11:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:25:03.713 11:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:25:03.713 11:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:03.713 11:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:03.713 11:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:03.713 11:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # IFS=.-: 00:25:03.713 11:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # read -ra ver1 00:25:03.713 11:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # IFS=.-: 00:25:03.713 11:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # read -ra ver2 00:25:03.713 11:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@338 -- # local 'op=<' 00:25:03.713 11:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@340 -- # ver1_l=2 00:25:03.713 11:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@341 -- # ver2_l=1 00:25:03.713 11:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:03.713 11:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@344 -- # case "$op" in 00:25:03.713 11:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@345 -- # : 1 00:25:03.713 11:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:03.713 11:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:03.713 11:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # decimal 1 00:25:03.713 11:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=1 00:25:03.713 11:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:03.713 11:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 1 00:25:03.713 11:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # ver1[v]=1 00:25:03.713 11:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # decimal 2 00:25:03.713 11:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=2 00:25:03.713 11:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:03.713 11:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 2 00:25:03.713 11:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # ver2[v]=2 00:25:03.713 11:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:03.713 11:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:03.713 11:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # return 0 00:25:03.713 11:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:03.713 11:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:25:03.713 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:03.713 --rc genhtml_branch_coverage=1 00:25:03.713 --rc genhtml_function_coverage=1 00:25:03.713 --rc genhtml_legend=1 00:25:03.713 --rc geninfo_all_blocks=1 00:25:03.714 --rc geninfo_unexecuted_blocks=1 00:25:03.714 00:25:03.714 ' 00:25:03.714 11:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:25:03.714 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:03.714 --rc genhtml_branch_coverage=1 00:25:03.714 --rc genhtml_function_coverage=1 00:25:03.714 --rc genhtml_legend=1 00:25:03.714 --rc geninfo_all_blocks=1 00:25:03.714 --rc geninfo_unexecuted_blocks=1 00:25:03.714 00:25:03.714 ' 00:25:03.714 11:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:25:03.714 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:03.714 --rc genhtml_branch_coverage=1 00:25:03.714 --rc genhtml_function_coverage=1 00:25:03.714 --rc genhtml_legend=1 00:25:03.714 --rc geninfo_all_blocks=1 00:25:03.714 --rc geninfo_unexecuted_blocks=1 00:25:03.714 00:25:03.714 ' 00:25:03.714 11:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:25:03.714 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:03.714 --rc genhtml_branch_coverage=1 00:25:03.714 --rc genhtml_function_coverage=1 00:25:03.714 --rc genhtml_legend=1 00:25:03.714 --rc geninfo_all_blocks=1 00:25:03.714 --rc geninfo_unexecuted_blocks=1 00:25:03.714 00:25:03.714 ' 00:25:03.714 11:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:25:03.714 11:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:25:03.714 11:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:25:03.714 11:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # uname -s 00:25:03.714 11:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:03.714 11:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:03.714 11:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:03.714 11:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:03.714 11:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:03.714 11:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:03.714 11:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:03.714 11:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:03.714 11:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:03.714 11:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:03.714 11:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a 00:25:03.714 11:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@18 -- # NVME_HOSTID=806d442c-08ba-4719-b76d-33169283459a 00:25:03.714 11:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:03.714 11:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:03.714 11:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:25:03.714 11:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:03.714 11:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:03.714 11:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@15 -- # shopt -s extglob 00:25:03.714 11:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:03.714 11:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:03.714 11:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:03.714 11:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:03.714 11:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:03.714 11:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:03.714 11:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@5 -- # export PATH 00:25:03.714 11:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:03.714 11:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@51 -- # : 0 00:25:03.714 11:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:03.714 11:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:03.714 11:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:03.714 11:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:03.714 11:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:03.714 11:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:25:03.714 11:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:25:03.714 11:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:03.714 11:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:03.714 11:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:03.714 11:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:25:03.714 11:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:25:03.714 11:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:25:03.714 11:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:25:03.714 11:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:25:03.714 11:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:03.714 11:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:25:03.714 ************************************ 00:25:03.714 START TEST nvmf_abort 00:25:03.714 ************************************ 00:25:03.714 11:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:25:03.975 * Looking for test storage... 00:25:03.975 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:25:03.975 11:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:25:03.975 11:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1693 -- # lcov --version 00:25:03.975 11:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:25:03.975 11:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:25:03.975 11:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:03.975 11:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:03.975 11:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:03.975 11:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:25:03.975 11:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:25:03.975 11:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:25:03.975 11:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:25:03.975 11:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:25:03.975 11:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:25:03.975 11:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:25:03.975 11:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:03.975 11:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:25:03.975 11:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:25:03.976 11:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:03.976 11:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:03.976 11:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:25:03.976 11:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:25:03.976 11:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:03.976 11:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:25:03.976 11:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:25:03.976 11:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:25:03.976 11:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:25:03.976 11:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:03.976 11:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:25:03.976 11:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:25:03.976 11:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:03.976 11:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:03.976 11:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:25:03.976 11:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:03.976 11:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:25:03.976 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:03.976 --rc genhtml_branch_coverage=1 00:25:03.976 --rc genhtml_function_coverage=1 00:25:03.976 --rc genhtml_legend=1 00:25:03.976 --rc geninfo_all_blocks=1 00:25:03.976 --rc geninfo_unexecuted_blocks=1 00:25:03.976 00:25:03.976 ' 00:25:03.976 11:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:25:03.976 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:03.976 --rc genhtml_branch_coverage=1 00:25:03.976 --rc genhtml_function_coverage=1 00:25:03.976 --rc genhtml_legend=1 00:25:03.976 --rc geninfo_all_blocks=1 00:25:03.976 --rc geninfo_unexecuted_blocks=1 00:25:03.976 00:25:03.976 ' 00:25:03.976 11:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:25:03.976 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:03.976 --rc genhtml_branch_coverage=1 00:25:03.976 --rc genhtml_function_coverage=1 00:25:03.976 --rc genhtml_legend=1 00:25:03.976 --rc geninfo_all_blocks=1 00:25:03.976 --rc geninfo_unexecuted_blocks=1 00:25:03.976 00:25:03.976 ' 00:25:03.976 11:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:25:03.976 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:03.976 --rc genhtml_branch_coverage=1 00:25:03.976 --rc genhtml_function_coverage=1 00:25:03.976 --rc genhtml_legend=1 00:25:03.976 --rc geninfo_all_blocks=1 00:25:03.976 --rc geninfo_unexecuted_blocks=1 00:25:03.976 00:25:03.976 ' 00:25:03.976 11:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
target/abort.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:25:03.976 11:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:25:03.976 11:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:03.976 11:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:03.976 11:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:03.976 11:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:03.976 11:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:03.976 11:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:03.976 11:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:03.976 11:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:03.976 11:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:03.976 11:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:03.976 11:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a 00:25:03.976 11:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=806d442c-08ba-4719-b76d-33169283459a 00:25:03.976 11:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:03.976 11:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:03.976 11:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:25:03.976 11:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:03.976 11:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:03.976 11:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:25:03.976 11:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:03.976 11:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:03.976 11:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:03.976 11:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:03.976 11:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:03.976 11:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:03.976 11:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:25:03.976 11:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:03.976 11:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:25:03.976 11:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:03.976 11:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:03.976 11:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:03.976 11:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:03.976 11:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:03.976 11:37:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:25:03.976 11:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:25:03.976 11:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:03.976 11:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:03.977 11:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:03.977 11:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:03.977 11:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:25:03.977 11:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:25:03.977 11:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:03.977 11:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:03.977 11:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:03.977 11:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:03.977 11:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:03.977 11:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:03.977 11:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:03.977 11:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:03.977 11:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:25:03.977 11:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:25:03.977 11:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:25:03.977 11:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:25:03.977 11:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:25:03.977 11:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@460 -- # nvmf_veth_init 00:25:03.977 11:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:03.977 11:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:25:03.977 11:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:25:03.977 11:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:25:03.977 11:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:03.977 11:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:25:03.977 11:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@151 -- # 
NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:25:03.977 11:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:25:03.977 11:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:25:03.977 11:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:25:03.977 11:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:25:03.977 11:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:03.977 11:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:25:03.977 11:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:25:03.977 11:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:25:03.977 11:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:25:03.977 11:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:25:03.977 Cannot find device "nvmf_init_br" 00:25:03.977 11:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@162 -- # true 00:25:03.977 11:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:25:03.977 Cannot find device "nvmf_init_br2" 00:25:03.977 11:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@163 -- # true 00:25:03.977 11:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:25:03.977 Cannot find device "nvmf_tgt_br" 00:25:03.977 11:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@164 -- # true 00:25:03.977 11:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:25:03.977 Cannot find device "nvmf_tgt_br2" 00:25:03.977 11:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@165 -- # true 00:25:03.977 11:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:25:03.977 Cannot find device "nvmf_init_br" 00:25:03.977 11:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@166 -- # true 00:25:03.977 11:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:25:03.977 Cannot find device "nvmf_init_br2" 00:25:03.977 11:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@167 -- # true 00:25:03.977 11:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:25:03.977 Cannot find device "nvmf_tgt_br" 00:25:03.977 11:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@168 -- # true 00:25:03.977 11:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:25:03.977 Cannot find device "nvmf_tgt_br2" 00:25:03.977 11:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
nvmf/common.sh@169 -- # true 00:25:03.977 11:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:25:04.237 Cannot find device "nvmf_br" 00:25:04.237 11:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@170 -- # true 00:25:04.237 11:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:25:04.237 Cannot find device "nvmf_init_if" 00:25:04.237 11:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@171 -- # true 00:25:04.237 11:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:25:04.237 Cannot find device "nvmf_init_if2" 00:25:04.237 11:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@172 -- # true 00:25:04.237 11:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:25:04.237 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:04.237 11:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@173 -- # true 00:25:04.237 11:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:25:04.237 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:04.237 11:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@174 -- # true 00:25:04.237 11:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:25:04.237 11:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:25:04.237 11:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:25:04.237 11:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:25:04.237 11:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:25:04.237 11:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:25:04.237 11:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:25:04.237 11:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:25:04.237 11:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:25:04.237 11:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:25:04.237 11:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:25:04.237 11:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:25:04.237 11:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:25:04.237 
11:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:25:04.237 11:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:25:04.237 11:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:25:04.237 11:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:25:04.237 11:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:25:04.237 11:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:25:04.237 11:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:25:04.237 11:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:25:04.237 11:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:25:04.237 11:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:25:04.237 11:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:25:04.237 11:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:25:04.496 11:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:25:04.496 11:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:25:04.496 11:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:25:04.496 11:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:25:04.496 11:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:25:04.496 11:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:25:04.496 11:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:25:04.496 11:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:25:04.496 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:25:04.496 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.085 ms 00:25:04.496 00:25:04.496 --- 10.0.0.3 ping statistics --- 00:25:04.496 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:04.496 rtt min/avg/max/mdev = 0.085/0.085/0.085/0.000 ms 00:25:04.496 11:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:25:04.496 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:25:04.496 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.054 ms 00:25:04.496 00:25:04.496 --- 10.0.0.4 ping statistics --- 00:25:04.496 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:04.496 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:25:04.496 11:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:25:04.496 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:04.496 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:25:04.496 00:25:04.496 --- 10.0.0.1 ping statistics --- 00:25:04.496 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:04.496 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:25:04.496 11:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:25:04.496 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:04.496 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.061 ms 00:25:04.496 00:25:04.496 --- 10.0.0.2 ping statistics --- 00:25:04.496 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:04.496 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:25:04.496 11:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:04.496 11:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@461 -- # return 0 00:25:04.496 11:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:04.496 11:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:04.496 11:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:04.496 11:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:04.496 11:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:04.496 11:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:04.496 11:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:04.496 11:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:25:04.496 11:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:04.496 11:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:04.496 11:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:25:04.496 11:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@509 -- # nvmfpid=97811 00:25:04.496 11:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk 
/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:25:04.496 11:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 97811 00:25:04.496 11:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 97811 ']' 00:25:04.496 11:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:04.496 11:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:04.496 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:04.496 11:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:04.496 11:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:04.496 11:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:25:04.497 [2024-11-20 11:37:49.984646] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:25:04.497 [2024-11-20 11:37:49.986194] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 00:25:04.497 [2024-11-20 11:37:49.986280] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:04.755 [2024-11-20 11:37:50.147198] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:25:04.755 [2024-11-20 11:37:50.196886] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:04.755 [2024-11-20 11:37:50.196955] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:04.755 [2024-11-20 11:37:50.196976] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:04.755 [2024-11-20 11:37:50.196990] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:04.755 [2024-11-20 11:37:50.197002] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:04.755 [2024-11-20 11:37:50.198066] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:04.755 [2024-11-20 11:37:50.198163] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:25:04.755 [2024-11-20 11:37:50.198180] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:04.755 [2024-11-20 11:37:50.263186] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:25:04.755 [2024-11-20 11:37:50.263408] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:25:04.755 [2024-11-20 11:37:50.263712] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:25:04.755 [2024-11-20 11:37:50.263851] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:25:04.755 11:37:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:04.755 11:37:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:25:04.755 11:37:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:04.755 11:37:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:04.755 11:37:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:25:04.755 11:37:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:04.755 11:37:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:25:04.755 11:37:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:04.755 11:37:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:25:05.014 [2024-11-20 11:37:50.343207] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:05.014 11:37:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:05.014 11:37:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:25:05.014 11:37:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:05.014 11:37:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:25:05.014 Malloc0 00:25:05.014 11:37:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:05.014 11:37:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:25:05.014 11:37:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:05.014 11:37:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:25:05.014 Delay0 00:25:05.014 11:37:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:05.014 11:37:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:25:05.014 11:37:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:05.014 11:37:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:25:05.014 11:37:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:05.014 11:37:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:25:05.014 11:37:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:05.014 11:37:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:25:05.014 11:37:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:05.014 11:37:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:25:05.014 11:37:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:05.014 11:37:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:25:05.014 [2024-11-20 11:37:50.411260] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:25:05.014 11:37:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:05.014 11:37:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:25:05.014 11:37:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:05.014 11:37:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:25:05.014 11:37:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:05.014 11:37:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:25:05.014 [2024-11-20 11:37:50.599387] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:25:07.547 Initializing NVMe Controllers 00:25:07.547 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode0 00:25:07.547 controller IO queue size 128 less than required 00:25:07.547 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:25:07.547 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:25:07.547 Initialization complete. Launching workers. 
00:25:07.547 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 127, failed: 26223 00:25:07.547 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 26284, failed to submit 66 00:25:07.547 success 26223, unsuccessful 61, failed 0 00:25:07.547 11:37:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:25:07.547 11:37:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:07.547 11:37:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:25:07.547 11:37:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:07.547 11:37:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:25:07.547 11:37:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:25:07.547 11:37:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:07.547 11:37:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:25:07.547 11:37:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:07.547 11:37:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:25:07.547 11:37:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:07.547 11:37:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:07.547 rmmod nvme_tcp 00:25:07.547 rmmod nvme_fabrics 00:25:07.547 rmmod nvme_keyring 00:25:07.547 11:37:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:07.547 11:37:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:25:07.547 11:37:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:25:07.547 11:37:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 97811 ']' 00:25:07.547 11:37:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 97811 00:25:07.547 11:37:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 97811 ']' 00:25:07.547 11:37:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 97811 00:25:07.547 11:37:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:25:07.547 11:37:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:07.547 11:37:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 97811 00:25:07.547 killing process with pid 97811 00:25:07.547 11:37:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:25:07.547 11:37:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:25:07.547 11:37:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 97811' 00:25:07.547 
11:37:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@973 -- # kill 97811 00:25:07.547 11:37:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@978 -- # wait 97811 00:25:07.547 11:37:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:07.547 11:37:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:07.547 11:37:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:07.547 11:37:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:25:07.547 11:37:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:25:07.547 11:37:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:07.547 11:37:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:25:07.547 11:37:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:07.547 11:37:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:25:07.547 11:37:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:25:07.547 11:37:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:25:07.547 11:37:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:25:07.547 11:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:25:07.547 11:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:25:07.547 11:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:25:07.547 11:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:25:07.547 11:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:25:07.547 11:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:25:07.547 11:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:25:07.547 11:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:25:07.547 11:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:25:07.806 11:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:25:07.806 11:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@246 -- # remove_spdk_ns 00:25:07.806 11:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:07.806 11:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:07.806 11:37:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:07.806 11:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@300 -- # return 0 00:25:07.806 00:25:07.806 real 0m3.940s 00:25:07.806 user 0m8.900s 00:25:07.806 sys 0m1.504s 00:25:07.806 11:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:07.806 ************************************ 00:25:07.806 END TEST nvmf_abort 00:25:07.806 ************************************ 00:25:07.806 11:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:25:07.806 11:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:25:07.806 11:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:25:07.806 11:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:07.806 11:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:25:07.806 ************************************ 00:25:07.806 START TEST nvmf_ns_hotplug_stress 00:25:07.806 ************************************ 00:25:07.806 11:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:25:07.806 * Looking for test storage... 00:25:07.806 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:25:07.806 11:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:25:07.806 11:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lcov --version 00:25:07.806 11:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:25:07.806 11:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:25:07.806 11:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:07.806 11:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:07.806 11:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:07.806 11:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:25:07.806 11:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:25:07.806 11:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:25:07.806 11:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:25:07.806 11:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:25:07.806 11:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:25:07.806 11:37:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:25:07.806 11:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:07.806 11:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:25:07.806 11:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:25:07.806 11:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:07.806 11:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:07.806 11:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:25:07.806 11:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:25:07.806 11:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:08.066 11:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:25:08.066 11:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:25:08.066 11:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:25:08.066 11:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:25:08.066 11:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:08.066 11:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:25:08.066 11:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:25:08.066 11:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:08.066 11:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:08.066 11:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:25:08.066 11:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:08.066 11:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:25:08.066 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:08.066 --rc genhtml_branch_coverage=1 00:25:08.066 --rc genhtml_function_coverage=1 00:25:08.066 --rc genhtml_legend=1 00:25:08.066 --rc geninfo_all_blocks=1 00:25:08.066 --rc geninfo_unexecuted_blocks=1 00:25:08.066 00:25:08.066 ' 00:25:08.066 11:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:25:08.066 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:08.066 --rc genhtml_branch_coverage=1 00:25:08.066 --rc genhtml_function_coverage=1 00:25:08.066 --rc genhtml_legend=1 00:25:08.066 --rc geninfo_all_blocks=1 00:25:08.066 --rc geninfo_unexecuted_blocks=1 00:25:08.066 00:25:08.066 
' 00:25:08.066 11:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:25:08.066 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:08.066 --rc genhtml_branch_coverage=1 00:25:08.066 --rc genhtml_function_coverage=1 00:25:08.066 --rc genhtml_legend=1 00:25:08.066 --rc geninfo_all_blocks=1 00:25:08.066 --rc geninfo_unexecuted_blocks=1 00:25:08.066 00:25:08.066 ' 00:25:08.066 11:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:25:08.066 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:08.066 --rc genhtml_branch_coverage=1 00:25:08.066 --rc genhtml_function_coverage=1 00:25:08.066 --rc genhtml_legend=1 00:25:08.066 --rc geninfo_all_blocks=1 00:25:08.066 --rc geninfo_unexecuted_blocks=1 00:25:08.066 00:25:08.066 ' 00:25:08.066 11:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:25:08.066 11:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:25:08.066 11:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:08.066 11:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:08.066 11:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:08.066 11:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:08.066 11:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:08.066 11:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:08.066 11:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:08.066 11:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:08.066 11:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:08.066 11:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:08.066 11:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a 00:25:08.066 11:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=806d442c-08ba-4719-b76d-33169283459a 00:25:08.066 11:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:08.066 11:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:08.066 11:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:25:08.066 11:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:08.066 11:37:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:08.066 11:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:25:08.066 11:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:08.066 11:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:08.067 11:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:08.067 11:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:08.067 11:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:08.067 11:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:08.067 11:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:25:08.067 11:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:08.067 11:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:25:08.067 11:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:08.067 11:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:08.067 11:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:08.067 11:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:08.067 11:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:08.067 11:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:25:08.067 11:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:25:08.067 11:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:08.067 11:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:08.067 11:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:08.067 11:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:08.067 11:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:25:08.067 11:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:08.067 11:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:08.067 11:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:08.067 11:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:08.067 11:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:08.067 11:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:08.067 11:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:08.067 11:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:08.067 11:37:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:25:08.067 11:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:25:08.067 11:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:25:08.067 11:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:25:08.067 11:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:25:08.067 11:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@460 -- # nvmf_veth_init 00:25:08.067 11:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:08.067 11:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:25:08.067 11:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:25:08.067 11:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:25:08.067 11:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:08.067 11:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:25:08.067 11:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:25:08.067 11:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:25:08.067 11:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:25:08.067 11:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:25:08.067 11:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:25:08.067 11:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:08.067 11:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:25:08.067 11:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:25:08.067 11:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:25:08.067 11:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:25:08.067 11:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:25:08.067 Cannot find device "nvmf_init_br" 00:25:08.067 11:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@162 -- # true 00:25:08.067 11:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@163 
-- # ip link set nvmf_init_br2 nomaster 00:25:08.067 Cannot find device "nvmf_init_br2" 00:25:08.067 11:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@163 -- # true 00:25:08.067 11:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:25:08.067 Cannot find device "nvmf_tgt_br" 00:25:08.067 11:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@164 -- # true 00:25:08.067 11:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:25:08.067 Cannot find device "nvmf_tgt_br2" 00:25:08.067 11:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@165 -- # true 00:25:08.067 11:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:25:08.067 Cannot find device "nvmf_init_br" 00:25:08.067 11:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@166 -- # true 00:25:08.067 11:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:25:08.067 Cannot find device "nvmf_init_br2" 00:25:08.067 11:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@167 -- # true 00:25:08.067 11:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:25:08.067 Cannot find device "nvmf_tgt_br" 00:25:08.067 11:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@168 -- # true 00:25:08.067 11:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:25:08.067 Cannot find device "nvmf_tgt_br2" 00:25:08.067 11:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@169 -- # true 00:25:08.067 11:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:25:08.067 Cannot find device "nvmf_br" 00:25:08.068 11:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@170 -- # true 00:25:08.068 11:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:25:08.068 Cannot find device "nvmf_init_if" 00:25:08.068 11:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@171 -- # true 00:25:08.068 11:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:25:08.068 Cannot find device "nvmf_init_if2" 00:25:08.068 11:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@172 -- # true 00:25:08.068 11:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:25:08.068 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:08.068 11:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@173 -- # true 00:25:08.068 11:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:25:08.068 Cannot 
open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:08.068 11:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@174 -- # true 00:25:08.068 11:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:25:08.068 11:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:25:08.068 11:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:25:08.068 11:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:25:08.068 11:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:25:08.068 11:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:25:08.068 11:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:25:08.327 11:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:25:08.327 11:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:25:08.327 11:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:25:08.327 11:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:25:08.327 11:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:25:08.327 11:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:25:08.327 11:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:25:08.327 11:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:25:08.327 11:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:25:08.327 11:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:25:08.327 11:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:25:08.327 11:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:25:08.327 11:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:25:08.327 11:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:25:08.327 11:37:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:25:08.327 11:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:25:08.327 11:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:25:08.327 11:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:25:08.327 11:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:25:08.327 11:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:25:08.327 11:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:25:08.327 11:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:25:08.327 11:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:25:08.327 11:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:25:08.327 11:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:25:08.327 11:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:25:08.327 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:25:08.327 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.057 ms 00:25:08.327 00:25:08.327 --- 10.0.0.3 ping statistics --- 00:25:08.327 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:08.327 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:25:08.327 11:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:25:08.327 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:25:08.327 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.087 ms 00:25:08.327 00:25:08.327 --- 10.0.0.4 ping statistics --- 00:25:08.327 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:08.327 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:25:08.327 11:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:25:08.327 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:08.327 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:25:08.327 00:25:08.327 --- 10.0.0.1 ping statistics --- 00:25:08.327 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:08.327 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:25:08.327 11:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:25:08.327 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:08.327 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.059 ms 00:25:08.327 00:25:08.327 --- 10.0.0.2 ping statistics --- 00:25:08.327 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:08.327 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:25:08.327 11:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:08.327 11:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@461 -- # return 0 00:25:08.327 11:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:08.327 11:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:08.327 11:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:08.327 11:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:08.327 11:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:08.327 11:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:08.327 11:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:08.327 11:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:25:08.327 11:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:08.327 11:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:08.327 11:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:25:08.327 11:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=98085 00:25:08.327 11:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:25:08.327 11:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 98085 00:25:08.327 11:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 98085 ']' 00:25:08.327 11:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:08.327 11:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:08.327 Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock... 00:25:08.327 11:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:08.328 11:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:08.328 11:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:25:08.586 [2024-11-20 11:37:53.914025] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:25:08.587 [2024-11-20 11:37:53.915067] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 00:25:08.587 [2024-11-20 11:37:53.915127] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:08.587 [2024-11-20 11:37:54.064785] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:25:08.587 [2024-11-20 11:37:54.098267] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:08.587 [2024-11-20 11:37:54.098324] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:08.587 [2024-11-20 11:37:54.098337] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:08.587 [2024-11-20 11:37:54.098345] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:08.587 [2024-11-20 11:37:54.098352] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:08.587 [2024-11-20 11:37:54.099104] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:08.587 [2024-11-20 11:37:54.099510] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:25:08.587 [2024-11-20 11:37:54.099522] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:08.587 [2024-11-20 11:37:54.151021] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:25:08.587 [2024-11-20 11:37:54.151585] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:25:08.587 [2024-11-20 11:37:54.151678] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:25:08.587 [2024-11-20 11:37:54.152345] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:25:08.845 11:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:08.845 11:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:25:08.845 11:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:08.845 11:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:08.845 11:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:25:08.845 11:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:08.845 11:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:25:08.845 11:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:25:09.103 [2024-11-20 11:37:54.476521] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:09.103 11:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:25:09.363 11:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:25:09.622 [2024-11-20 11:37:55.041047] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:25:09.622 11:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:25:09.880 11:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:25:10.139 Malloc0 00:25:10.139 11:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:25:10.398 Delay0 00:25:10.398 11:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:25:10.657 11:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:25:11.223 NULL1 00:25:11.223 11:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:25:11.481 11:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=98208 00:25:11.481 11:37:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:25:11.481 11:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 98208 00:25:11.481 11:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:25:12.921 Read completed with error (sct=0, sc=11) 00:25:12.921 11:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:25:12.921 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:25:12.921 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:25:12.921 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:25:12.921 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:25:12.921 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:25:12.921 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:25:12.921 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:25:13.179 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:25:13.179 11:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:25:13.179 11:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:25:13.437 true 00:25:13.437 11:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 98208 00:25:13.437 11:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:25:14.007 11:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:25:14.574 11:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:25:14.574 11:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:25:14.832 true 00:25:14.832 11:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 98208 00:25:14.832 11:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:25:16.204 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:25:16.204 11:38:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:25:16.204 
Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:25:16.204 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:25:16.204 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:25:16.205 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:25:16.205 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:25:16.463 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:25:16.463 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:25:16.463 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:25:16.463 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:25:16.463 11:38:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:25:16.463 11:38:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:25:17.028 true 00:25:17.028 11:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 98208 00:25:17.028 11:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:25:17.594 11:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:25:17.595 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:25:17.595 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:25:17.595 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:25:17.852 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:25:17.852 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:25:17.852 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:25:17.852 11:38:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:25:17.852 11:38:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:25:18.418 true 00:25:18.418 11:38:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 98208 00:25:18.418 11:38:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:25:18.983 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:25:18.983 11:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:25:18.983 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:25:18.983 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:25:19.241 11:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:25:19.241 11:38:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:25:19.499 true 00:25:19.499 11:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 98208 00:25:19.499 11:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:25:20.065 11:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:25:20.323 11:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:25:20.323 11:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:25:20.890 true 00:25:20.890 11:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 98208 00:25:20.890 11:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:25:22.265 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:25:22.265 11:38:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:25:22.265 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:25:22.265 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:25:22.265 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:25:22.265 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:25:22.265 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:25:22.265 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:25:22.265 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:25:22.525 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:25:22.525 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:25:22.525 11:38:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:25:22.525 11:38:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:25:22.783 true 00:25:22.783 11:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 98208 00:25:22.783 11:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:25:23.741 11:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:25:23.741 Message suppressed 999 times: Read 
completed with error (sct=0, sc=11) 00:25:23.741 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:25:23.741 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:25:23.741 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:25:23.741 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:25:23.741 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:25:23.741 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:25:23.741 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:25:23.741 11:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:25:23.741 11:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:25:24.308 true 00:25:24.308 11:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 98208 00:25:24.308 11:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:25:24.876 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:25:24.876 11:38:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:25:24.876 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:25:24.876 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:25:24.876 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:25:25.135 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:25:25.135 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:25:25.135 11:38:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:25:25.135 11:38:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:25:25.703 true 00:25:25.703 11:38:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 98208 00:25:25.703 11:38:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:25:25.961 11:38:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:25:26.220 11:38:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:25:26.220 11:38:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:25:26.480 true 00:25:26.480 11:38:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 98208 00:25:26.480 11:38:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:25:26.739 11:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:25:26.998 11:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:25:26.998 11:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:25:27.256 true 00:25:27.256 11:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 98208 00:25:27.256 11:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:25:27.822 11:38:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:25:28.108 11:38:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:25:28.108 11:38:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:25:28.366 true 00:25:28.366 11:38:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 98208 00:25:28.367 11:38:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:25:29.302 11:38:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:25:29.302 11:38:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:25:29.303 11:38:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:25:29.561 true 00:25:29.561 11:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 98208 00:25:29.561 11:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:25:30.129 11:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:25:30.129 11:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:25:30.129 11:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:25:30.388 true 00:25:30.646 11:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 98208 00:25:30.646 11:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:25:30.905 11:38:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:25:31.162 11:38:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:25:31.162 11:38:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:25:31.420 true 00:25:31.420 11:38:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 98208 00:25:31.420 11:38:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:25:31.678 11:38:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:25:32.244 11:38:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:25:32.244 11:38:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:25:32.502 true 00:25:32.502 11:38:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 98208 00:25:32.502 11:38:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:25:33.068 11:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:25:33.329 11:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:25:33.329 11:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:25:33.904 true 00:25:33.904 11:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 98208 00:25:33.904 11:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:25:33.904 11:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:25:34.471 11:38:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:25:34.471 11:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:25:34.730 true 00:25:34.730 11:38:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 98208 00:25:34.730 11:38:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:25:34.989 11:38:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:25:35.555 11:38:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:25:35.555 11:38:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:25:35.813 true 00:25:35.813 11:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 98208 00:25:35.813 11:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:25:36.072 11:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:25:36.330 11:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:25:36.330 11:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:25:36.896 true 00:25:36.896 11:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 98208 00:25:36.896 11:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:25:38.290 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:25:38.290 11:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:25:38.290 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:25:38.290 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:25:38.290 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:25:38.290 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:25:38.290 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:25:38.290 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:25:38.585 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:25:38.585 Message suppressed 999 times: Read completed with error (sct=0, 
sc=11) 00:25:38.585 11:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:25:38.585 11:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:25:38.844 true 00:25:38.844 11:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 98208 00:25:38.844 11:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:25:39.411 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:25:39.411 11:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:25:39.669 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:25:39.669 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:25:39.669 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:25:39.669 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:25:39.669 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:25:39.927 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:25:39.927 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:25:39.927 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:25:39.927 11:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:25:39.927 11:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:25:40.186 true 00:25:40.186 11:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 98208 00:25:40.186 11:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:25:41.120 11:38:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:25:41.120 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:25:41.120 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:25:41.120 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:25:41.121 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:25:41.121 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:25:41.379 11:38:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:25:41.379 11:38:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:25:41.637 true 00:25:41.637 11:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 
98208 00:25:41.638 11:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:25:41.897 Initializing NVMe Controllers
00:25:41.897 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1
00:25:41.897 Controller IO queue size 128, less than required.
00:25:41.897 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:25:41.897 Controller IO queue size 128, less than required.
00:25:41.897 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:25:41.897 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:25:41.897 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:25:41.897 Initialization complete. Launching workers.
00:25:41.897 ========================================================
00:25:41.897                                                                              Latency(us)
00:25:41.897 Device Information                                                       :       IOPS      MiB/s    Average        min        max
00:25:41.897 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:    2520.31       1.23   28313.83    3211.48 1072926.56
00:25:41.897 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0:   10201.39       4.98   12548.26    3096.78  661697.79
00:25:41.897 ========================================================
00:25:41.897 Total                                                                    :   12721.69       6.21   15671.59    3096.78 1072926.56
00:25:41.897
00:25:41.897 11:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:25:42.155 11:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:25:42.155 11:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:25:42.413 true 00:25:42.413 11:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 98208 00:25:42.413 /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (98208) - No such process 00:25:42.413 11:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 98208 00:25:42.413 11:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:25:42.672 11:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:25:42.930 11:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:25:42.930 11:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:25:42.930 11:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:25:42.930 11:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:25:42.930 11:38:28 
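As a quick cross-check of the perf summary above: the Total row is the sum of the two per-namespace IOPS figures, its min and max columns are the extremes of the two rows, and its average latency is the IOPS-weighted mean of the per-namespace averages. A small awk snippet, added here only to show that arithmetic (it is not part of the test), reproduces it:

    # Recomputes the Total row of the latency table from the two NSID rows above.
    awk 'BEGIN {
        iops1 = 2520.31;  avg1 = 28313.83;   # NSID 1: IOPS, average latency (us)
        iops2 = 10201.39; avg2 = 12548.26;   # NSID 2: IOPS, average latency (us)
        total = iops1 + iops2
        printf "Total: %.2f IOPS, %.2f us average\n", total, (iops1*avg1 + iops2*avg2)/total
    }'
    # Prints "Total: 12721.70 IOPS, 15671.59 us average", matching the table's
    # 12721.69 / 15671.59 up to rounding of the per-namespace values.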
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:25:43.204 null0 00:25:43.204 11:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:25:43.204 11:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:25:43.204 11:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:25:43.474 null1 00:25:43.474 11:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:25:43.474 11:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:25:43.474 11:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:25:43.733 null2 00:25:43.733 11:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:25:43.733 11:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:25:43.733 11:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:25:43.992 null3 00:25:43.992 11:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:25:43.992 11:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:25:43.992 11:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:25:44.250 null4 00:25:44.250 11:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:25:44.250 11:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:25:44.250 11:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:25:44.817 null5 00:25:44.817 11:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:25:44.817 11:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:25:44.817 11:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:25:44.817 null6 00:25:45.075 11:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:25:45.075 11:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:25:45.075 11:38:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:25:45.075 null7 00:25:45.335 11:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:25:45.335 11:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:25:45.335 11:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:25:45.335 11:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:25:45.335 11:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:25:45.335 11:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:25:45.335 11:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:25:45.335 11:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:25:45.335 11:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:25:45.335 11:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:25:45.335 11:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:25:45.335 11:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:25:45.335 11:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:25:45.335 11:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:25:45.335 11:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:25:45.335 11:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:25:45.335 11:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:25:45.335 11:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:25:45.335 11:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:25:45.335 11:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:25:45.335 11:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:25:45.335 11:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:25:45.335 11:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:25:45.335 11:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:25:45.335 11:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:25:45.335 11:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:25:45.335 11:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:25:45.335 11:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:25:45.335 11:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:25:45.335 11:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:25:45.335 11:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:25:45.335 11:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:25:45.336 11:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:25:45.336 11:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:25:45.336 11:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:25:45.336 11:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:25:45.336 11:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:25:45.336 11:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:25:45.336 11:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:25:45.336 11:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:25:45.336 11:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:25:45.336 11:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:25:45.336 11:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:25:45.336 11:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:25:45.336 11:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:25:45.336 11:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:25:45.336 11:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:25:45.336 11:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:25:45.336 11:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:25:45.336 11:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:25:45.336 11:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:25:45.336 11:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:25:45.336 11:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:25:45.336 11:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
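From ns_hotplug_stress.sh@58 onward the test moves to its parallel phase: eight null bdevs (null0 through null7; the 100 and 4096 arguments to bdev_null_create are the size in MB and the block size) are created, and eight add_remove workers then attach and detach their own namespace concurrently, which is why the @16, @17 and @18 trace lines here are interleaved across workers. A rough sketch of this phase, reconstructed from the xtrace rather than quoted from the script, with rpc_py again an assumed variable name:

    # add_remove <nsid> <bdev>: attach and detach the same namespace ten times (@14-@18).
    add_remove() {
        local nsid=$1 bdev=$2
        for ((i = 0; i < 10; i++)); do
            $rpc_py nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"   # @17
            $rpc_py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"           # @18
        done
    }

    nthreads=8                                        # @58
    pids=()
    for ((i = 0; i < nthreads; i++)); do              # @59
        $rpc_py bdev_null_create "null$i" 100 4096    # @60: null0 .. null7
    done
    for ((i = 0; i < nthreads; i++)); do              # @62
        add_remove "$((i + 1))" "null$i" &            # @63: nsid 1..8 paired with null0..null7
        pids+=("$!")                                  # @64
    done
    wait "${pids[@]}"                                 # @66: wait 99121 99122 ... 99133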
00:25:45.336 11:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:25:45.336 11:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:25:45.336 11:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:25:45.336 11:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 99121 99122 99124 99125 99128 99129 99130 99133 00:25:45.336 11:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:25:45.336 11:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:25:45.336 11:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:25:45.336 11:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:25:45.336 11:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:25:45.336 11:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:25:45.336 11:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:25:45.336 11:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:25:45.336 11:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:25:45.336 11:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:25:45.336 11:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:25:45.595 11:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:25:45.595 11:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:25:45.595 11:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:25:45.595 11:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:25:45.595 11:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:25:45.595 11:38:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:25:45.595 11:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:25:45.595 11:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:25:45.853 11:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:25:45.853 11:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:25:45.853 11:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:25:45.853 11:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:25:45.853 11:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:25:45.853 11:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:25:45.853 11:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:25:45.853 11:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:25:45.853 11:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:25:45.853 11:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:25:45.853 11:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:25:45.853 11:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:25:45.853 11:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:25:45.853 11:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:25:45.853 11:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:25:46.112 11:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:25:46.112 11:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:25:46.112 11:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:25:46.112 11:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:25:46.112 11:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:25:46.112 11:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:25:46.112 11:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:25:46.112 11:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:25:46.112 11:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:25:46.112 11:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:25:46.112 11:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:25:46.112 11:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:25:46.112 11:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:25:46.371 11:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:25:46.371 11:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:25:46.371 11:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:25:46.371 11:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:25:46.371 11:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:25:46.371 11:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:25:46.371 11:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:25:46.371 11:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:25:46.371 11:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:25:46.629 11:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:25:46.629 11:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:25:46.629 11:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:25:46.629 11:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:25:46.629 11:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:25:46.629 11:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:25:46.630 11:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:25:46.630 11:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:25:46.630 11:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:25:46.630 11:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:25:46.887 11:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:25:46.887 11:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:25:46.887 11:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:25:46.887 11:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:25:46.887 11:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:25:46.887 11:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:25:46.887 11:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:25:46.887 11:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:25:46.887 11:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:25:46.887 
11:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:25:46.887 11:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:25:46.888 11:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:25:46.888 11:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:25:46.888 11:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:25:47.146 11:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:25:47.146 11:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:25:47.146 11:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:25:47.146 11:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:25:47.146 11:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:25:47.146 11:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:25:47.146 11:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:25:47.146 11:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:25:47.146 11:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:25:47.146 11:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:25:47.146 11:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:25:47.146 11:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:25:47.146 11:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:25:47.146 11:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:25:47.405 11:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:25:47.405 11:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:25:47.405 11:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:25:47.405 11:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:25:47.405 11:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:25:47.405 11:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:25:47.405 11:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:25:47.405 11:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:25:47.405 11:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:25:47.405 11:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:25:47.405 11:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:25:47.664 11:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:25:47.664 11:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:25:47.664 11:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:25:47.664 11:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:25:47.664 11:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:25:47.664 11:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:25:47.664 11:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:25:47.664 11:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:25:47.664 11:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:25:47.664 11:38:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:25:47.664 11:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:25:47.664 11:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:25:47.664 11:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:25:47.664 11:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:25:47.923 11:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:25:47.923 11:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:25:47.923 11:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:25:47.923 11:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:25:47.923 11:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:25:47.923 11:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:25:47.923 11:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:25:47.923 11:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:25:47.923 11:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:25:47.923 11:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:25:48.194 11:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:25:48.194 11:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:25:48.194 11:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:25:48.194 11:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:25:48.194 11:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:25:48.194 11:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:25:48.194 11:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:25:48.194 11:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:25:48.194 11:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:25:48.194 11:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:25:48.482 11:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:25:48.482 11:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:25:48.482 11:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:25:48.482 11:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:25:48.482 11:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:25:48.482 11:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:25:48.482 11:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:25:48.482 11:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:25:48.482 11:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:25:48.482 11:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:25:48.482 11:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:25:48.482 11:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:25:48.740 11:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:25:48.740 11:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:25:48.740 11:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:25:48.740 11:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:25:48.740 11:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:25:48.740 11:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:25:48.740 11:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:25:48.741 11:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:25:48.741 11:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:25:48.741 11:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:25:48.741 11:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:25:48.741 11:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:25:48.741 11:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:25:48.741 11:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:25:48.741 11:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:25:48.741 11:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:25:48.999 11:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:25:48.999 11:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:25:48.999 11:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:25:48.999 11:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:25:48.999 11:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:25:48.999 11:38:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:25:48.999 11:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:25:48.999 11:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:25:48.999 11:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:25:48.999 11:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:25:48.999 11:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:25:48.999 11:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:25:48.999 11:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:25:48.999 11:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:25:48.999 11:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:25:48.999 11:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:25:49.257 11:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:25:49.257 11:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:25:49.257 11:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:25:49.258 11:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:25:49.258 11:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:25:49.258 11:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:25:49.258 11:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:25:49.258 11:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:25:49.258 11:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:25:49.258 11:38:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:25:49.258 11:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:25:49.258 11:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:25:49.258 11:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:25:49.258 11:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:25:49.258 11:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:25:49.517 11:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:25:49.517 11:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:25:49.517 11:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:25:49.517 11:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:25:49.517 11:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:25:49.517 11:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:25:49.517 11:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:25:49.517 11:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:25:49.517 11:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:25:49.517 11:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:25:49.776 11:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:25:49.776 11:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:25:49.776 11:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:25:49.776 11:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:25:49.776 11:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:25:49.776 11:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:25:49.776 11:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:25:49.776 11:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:25:49.776 11:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:25:49.776 11:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:25:49.776 11:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:25:49.777 11:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:25:49.777 11:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:25:49.777 11:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:25:49.777 11:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:25:49.777 11:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:25:49.777 11:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:25:50.035 11:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:25:50.036 11:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:25:50.036 11:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:25:50.036 11:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:25:50.036 11:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:25:50.036 11:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:25:50.036 
11:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:25:50.294 11:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:25:50.294 11:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:25:50.294 11:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:25:50.294 11:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:25:50.294 11:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:25:50.294 11:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:25:50.294 11:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:25:50.294 11:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:25:50.294 11:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:25:50.294 11:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:25:50.294 11:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:25:50.294 11:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:25:50.553 11:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:25:50.553 11:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:25:50.553 11:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:25:50.553 11:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:25:50.553 11:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:25:50.553 11:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:25:50.553 11:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:25:50.553 11:38:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:25:50.553 11:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:25:50.553 11:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:25:50.553 11:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:25:50.553 11:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:25:50.553 11:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:25:50.553 11:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:25:50.553 11:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:25:50.553 11:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:25:50.553 11:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:25:50.811 11:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:25:50.811 11:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:25:50.811 11:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:25:50.811 11:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:25:50.811 11:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:25:50.811 11:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:25:50.811 11:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:25:50.811 11:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:25:51.068 11:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
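The xtrace records above all come from the add/remove loop in target/ns_hotplug_stress.sh (script lines 16-18 in this build): each iteration attaches one of the null bdevs (null0..null7) to nqn.2016-06.io.spdk:cnode1 under some nsid and detaches another, so namespaces churn while the connected initiator keeps issuing I/O. A minimal sketch of that loop, reconstructed only from the commands visible in the trace — the bdev names, NQN, loop bound, and rpc.py argument order are taken from the log; the per-iteration nsid selection is an assumption, not the verbatim script:

# Sketch reconstructed from the xtrace output above (hypothetical, not the upstream script).
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
nqn=nqn.2016-06.io.spdk:cnode1
for ((i = 0; i < 10; i++)); do                              # ns_hotplug_stress.sh@16: (( ++i )) / (( i < 10 ))
    n=$((RANDOM % 8 + 1))                                   # assumed: nsid chosen per iteration
    "$rpc" nvmf_subsystem_add_ns -n "$n" "$nqn" "null$((n - 1))"   # @17: attach null bdev as namespace n
    "$rpc" nvmf_subsystem_remove_ns "$nqn" "$((RANDOM % 8 + 1))"   # @18: detach some other namespace id
done

Add/remove failures are tolerated here by design: hitting an nsid that is already present or already gone is exactly the race the stress test is exercising.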
00:25:51.068 11:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:25:51.068 11:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:25:51.068 11:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:25:51.068 11:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:25:51.068 11:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:25:51.068 11:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:25:51.068 11:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:25:51.068 11:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:25:51.068 11:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:25:51.068 11:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:25:51.068 11:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:25:51.069 11:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:25:51.069 11:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:25:51.069 11:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:25:51.069 11:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:25:51.069 11:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:25:51.069 11:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:25:51.069 11:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:25:51.069 11:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:25:51.326 11:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 
00:25:51.326 11:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:25:51.326 11:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:25:51.326 11:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:25:51.326 11:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:25:51.326 11:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:25:51.326 11:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:25:51.326 11:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:25:51.326 11:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:25:51.326 11:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:25:51.612 11:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:25:51.612 11:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:25:51.612 11:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:25:51.612 11:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:25:51.612 11:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:25:51.612 11:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:25:51.612 11:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:25:51.612 11:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:25:51.612 11:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:25:51.612 11:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:25:51.870 11:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:25:51.870 11:38:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:25:51.870 11:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:25:51.870 11:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:25:51.870 11:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:25:51.870 11:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:25:51.870 11:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:25:51.870 11:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:25:51.870 11:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:25:51.870 11:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:25:52.127 11:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:25:52.127 11:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:25:52.127 11:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:25:52.127 11:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:25:52.127 11:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:25:52.127 11:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:25:52.127 11:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:52.127 11:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:25:52.127 11:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:52.127 11:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:25:52.127 11:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:52.127 11:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:52.127 rmmod nvme_tcp 00:25:52.127 rmmod nvme_fabrics 00:25:52.127 rmmod nvme_keyring 00:25:52.127 11:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:52.127 11:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:25:52.127 11:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:25:52.127 11:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@517 -- # '[' -n 98085 ']' 00:25:52.127 11:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 98085 00:25:52.127 11:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 98085 ']' 00:25:52.127 11:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 98085 00:25:52.127 11:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:25:52.127 11:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:52.127 11:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 98085 00:25:52.384 killing process with pid 98085 00:25:52.384 11:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:25:52.384 11:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:25:52.384 11:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 98085' 00:25:52.384 11:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 98085 00:25:52.384 11:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 98085 00:25:52.384 11:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:52.384 11:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:52.384 11:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:52.384 11:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:25:52.384 11:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:25:52.384 11:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:52.384 11:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:25:52.384 11:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:52.384 11:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:25:52.384 11:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:25:52.384 11:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:25:52.384 11:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:25:52.384 11:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:25:52.384 11:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@237 -- # ip 
link set nvmf_init_br down 00:25:52.384 11:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:25:52.384 11:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:25:52.384 11:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:25:52.384 11:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:25:52.641 11:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:25:52.641 11:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:25:52.641 11:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:25:52.641 11:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:25:52.641 11:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@246 -- # remove_spdk_ns 00:25:52.641 11:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:52.641 11:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:52.641 11:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:52.641 11:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@300 -- # return 0 00:25:52.641 00:25:52.641 real 0m44.890s 00:25:52.641 user 3m23.900s 00:25:52.641 sys 0m19.257s 00:25:52.641 ************************************ 00:25:52.641 END TEST nvmf_ns_hotplug_stress 00:25:52.641 ************************************ 00:25:52.641 11:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:52.641 11:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:25:52.641 11:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:25:52.641 11:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:25:52.641 11:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:52.641 11:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:25:52.641 ************************************ 00:25:52.641 START TEST nvmf_delete_subsystem 00:25:52.641 ************************************ 00:25:52.641 11:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:25:52.900 * Looking for test storage... 
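Before nvmf_delete_subsystem starts loading its helpers above, the preceding records show the hotplug test tearing its environment down via nvmftestfini: the nvme-tcp/nvme-fabrics (and dependent nvme-keyring) modules are unloaded, the nvmf_tgt process (pid 98085 in this run) is killed and reaped, the SPDK_NVMF iptables rules are dropped, and the veth/bridge topology plus the nvmf_tgt_ns_spdk network namespace are removed. Condensed into the commands visible in the trace — a sketch of the sequence, not the verbatim nvmf/common.sh helpers; the final netns deletion is assumed to be what _remove_spdk_ns does:

# Teardown steps as they appear in the trace (nvmftestfini / nvmf_veth_fini, sketched):
modprobe -v -r nvme-tcp                                   # also pulls out nvme_fabrics / nvme_keyring
modprobe -v -r nvme-fabrics
kill 98085 && wait 98085                                  # stop the target reactor started by this test
iptables-save | grep -v SPDK_NVMF | iptables-restore      # drop the test's firewall rules
for br in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$br" nomaster
    ip link set "$br" down
done
ip link delete nvmf_br type bridge                        # remove the test bridge
ip link delete nvmf_init_if
ip link delete nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
ip netns delete nvmf_tgt_ns_spdk                          # assumed: _remove_spdk_ns removes the namespace

The next test then rebuilds the same virtual topology from scratch (nvmftestinit / nvmf_veth_init), which is why the 10.0.0.x addresses reappear in the records that follow.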
00:25:52.900 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:25:52.900 11:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:25:52.900 11:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lcov --version 00:25:52.900 11:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:25:52.900 11:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:25:52.900 11:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:52.900 11:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:52.900 11:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:52.900 11:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:25:52.900 11:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:25:52.900 11:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:25:52.900 11:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:25:52.900 11:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:25:52.900 11:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:25:52.900 11:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:25:52.900 11:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:52.900 11:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:25:52.900 11:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:25:52.900 11:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:52.900 11:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:52.900 11:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:25:52.900 11:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:25:52.901 11:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:52.901 11:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:25:52.901 11:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:25:52.901 11:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:25:52.901 11:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:25:52.901 11:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:52.901 11:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:25:52.901 11:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:25:52.901 11:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:52.901 11:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:52.901 11:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:25:52.901 11:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:52.901 11:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:25:52.901 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:52.901 --rc genhtml_branch_coverage=1 00:25:52.901 --rc genhtml_function_coverage=1 00:25:52.901 --rc genhtml_legend=1 00:25:52.901 --rc geninfo_all_blocks=1 00:25:52.901 --rc geninfo_unexecuted_blocks=1 00:25:52.901 00:25:52.901 ' 00:25:52.901 11:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:25:52.901 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:52.901 --rc genhtml_branch_coverage=1 00:25:52.901 --rc genhtml_function_coverage=1 00:25:52.901 --rc genhtml_legend=1 00:25:52.901 --rc geninfo_all_blocks=1 00:25:52.901 --rc geninfo_unexecuted_blocks=1 00:25:52.901 00:25:52.901 ' 00:25:52.901 11:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:25:52.901 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:52.901 --rc genhtml_branch_coverage=1 00:25:52.901 --rc genhtml_function_coverage=1 00:25:52.901 --rc genhtml_legend=1 00:25:52.901 --rc geninfo_all_blocks=1 00:25:52.901 --rc geninfo_unexecuted_blocks=1 00:25:52.901 00:25:52.901 ' 00:25:52.901 11:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:25:52.901 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:52.901 --rc genhtml_branch_coverage=1 00:25:52.901 --rc genhtml_function_coverage=1 00:25:52.901 --rc 
genhtml_legend=1 00:25:52.901 --rc geninfo_all_blocks=1 00:25:52.901 --rc geninfo_unexecuted_blocks=1 00:25:52.901 00:25:52.901 ' 00:25:52.901 11:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:25:52.901 11:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:25:52.901 11:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:52.901 11:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:52.901 11:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:52.901 11:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:52.901 11:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:52.901 11:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:52.901 11:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:52.901 11:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:52.901 11:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:52.901 11:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:52.901 11:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a 00:25:52.901 11:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=806d442c-08ba-4719-b76d-33169283459a 00:25:52.901 11:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:52.901 11:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:52.901 11:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:25:52.901 11:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:52.901 11:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:52.901 11:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:25:52.901 11:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:52.901 11:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:52.901 11:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:52.901 11:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:52.901 11:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:52.901 11:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:52.901 11:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:25:52.901 11:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:52.901 11:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:25:52.901 11:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:52.901 11:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:52.901 11:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:52.901 11:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:52.901 11:38:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:52.901 11:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:25:52.901 11:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:25:52.901 11:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:52.901 11:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:52.901 11:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:52.901 11:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:25:52.902 11:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:52.902 11:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:52.902 11:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:52.902 11:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:52.902 11:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:52.902 11:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:52.902 11:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:52.902 11:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:52.902 11:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:25:52.902 11:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:25:52.902 11:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:25:52.902 11:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:25:52.902 11:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:25:52.902 11:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@460 -- # nvmf_veth_init 00:25:52.902 11:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:52.902 11:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:25:52.902 11:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:25:52.902 11:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:25:52.902 11:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:52.902 11:38:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:25:52.902 11:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:25:52.902 11:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:25:52.902 11:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:25:52.902 11:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:25:52.902 11:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:25:52.902 11:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:52.902 11:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:25:52.902 11:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:25:52.902 11:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:25:52.902 11:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:25:52.902 11:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:25:52.902 Cannot find device "nvmf_init_br" 00:25:52.902 11:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@162 -- # true 00:25:52.902 11:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:25:52.902 Cannot find device "nvmf_init_br2" 00:25:52.902 11:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@163 -- # true 00:25:52.902 11:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:25:52.902 Cannot find device "nvmf_tgt_br" 00:25:52.902 11:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@164 -- # true 00:25:52.902 11:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:25:52.902 Cannot find device "nvmf_tgt_br2" 00:25:52.902 11:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@165 -- # true 00:25:52.902 11:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:25:52.902 Cannot find device "nvmf_init_br" 00:25:52.902 11:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@166 -- # true 00:25:52.902 11:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:25:52.902 Cannot find device "nvmf_init_br2" 00:25:52.902 11:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@167 -- # true 00:25:52.902 11:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:25:52.902 Cannot find device "nvmf_tgt_br" 00:25:52.902 11:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@168 -- # true 00:25:52.902 11:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:25:52.902 Cannot find device "nvmf_tgt_br2" 00:25:52.902 11:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@169 -- # true 00:25:52.902 11:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:25:53.160 Cannot find device "nvmf_br" 00:25:53.160 11:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@170 -- # true 00:25:53.160 11:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:25:53.160 Cannot find device "nvmf_init_if" 00:25:53.160 11:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@171 -- # true 00:25:53.160 11:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:25:53.160 Cannot find device "nvmf_init_if2" 00:25:53.160 11:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@172 -- # true 00:25:53.160 11:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:25:53.160 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:53.160 11:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@173 -- # true 00:25:53.160 11:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:25:53.160 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:53.160 11:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@174 -- # true 00:25:53.160 11:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:25:53.160 11:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:25:53.160 11:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:25:53.160 11:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:25:53.160 11:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:25:53.161 11:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:25:53.161 11:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:25:53.161 11:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:25:53.161 11:38:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:25:53.161 11:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:25:53.161 11:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:25:53.161 11:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:25:53.161 11:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:25:53.161 11:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:25:53.161 11:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:25:53.161 11:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:25:53.161 11:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:25:53.161 11:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:25:53.161 11:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:25:53.161 11:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:25:53.161 11:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:25:53.161 11:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:25:53.161 11:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:25:53.161 11:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:25:53.161 11:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:25:53.161 11:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:25:53.161 11:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:25:53.161 11:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:25:53.434 11:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:25:53.434 11:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 
'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:25:53.434 11:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:25:53.434 11:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:25:53.434 11:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:25:53.434 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:25:53.434 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.063 ms 00:25:53.434 00:25:53.434 --- 10.0.0.3 ping statistics --- 00:25:53.434 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:53.434 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:25:53.434 11:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:25:53.434 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:25:53.434 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.056 ms 00:25:53.434 00:25:53.434 --- 10.0.0.4 ping statistics --- 00:25:53.434 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:53.434 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:25:53.434 11:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:25:53.434 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:53.434 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:25:53.434 00:25:53.434 --- 10.0.0.1 ping statistics --- 00:25:53.434 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:53.434 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:25:53.434 11:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:25:53.434 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:25:53.434 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.084 ms 00:25:53.434 00:25:53.434 --- 10.0.0.2 ping statistics --- 00:25:53.434 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:53.434 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:25:53.434 11:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:53.434 11:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@461 -- # return 0 00:25:53.434 11:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:53.434 11:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:53.434 11:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:53.434 11:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:53.434 11:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:53.434 11:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:53.434 11:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:53.434 11:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:25:53.434 11:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:53.434 11:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:53.434 11:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:25:53.434 11:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=100497 00:25:53.435 11:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:25:53.435 11:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 100497 00:25:53.435 11:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 100497 ']' 00:25:53.435 11:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:53.435 11:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:53.435 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:53.435 11:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
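Note: the nvmf_veth_init steps traced above build a small virtual topology for the TCP tests: two initiator-side veth pairs (nvmf_init_if/if2, addressed 10.0.0.1 and 10.0.0.2) and two target-side pairs (nvmf_tgt_if/if2, addressed 10.0.0.3 and 10.0.0.4) whose far ends are moved into the nvmf_tgt_ns_spdk namespace, with all the *_br peers enslaved to the nvmf_br bridge and TCP port 4420 opened through iptables; the four pings then confirm reachability in both directions. A minimal sketch of the same idea, reduced to one pair per side (interface, namespace and address names are taken from the trace, everything else is illustrative and not the exact nvmf_veth_init code):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge
    for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$dev" up; done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.3                                  # host to target namespace
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1   # target namespace back to host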
00:25:53.435 11:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:53.435 11:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:25:53.435 [2024-11-20 11:38:38.869971] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:25:53.435 [2024-11-20 11:38:38.871221] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 00:25:53.435 [2024-11-20 11:38:38.871289] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:53.721 [2024-11-20 11:38:39.029787] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:25:53.721 [2024-11-20 11:38:39.070112] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:53.721 [2024-11-20 11:38:39.070173] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:53.721 [2024-11-20 11:38:39.070187] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:53.721 [2024-11-20 11:38:39.070196] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:53.721 [2024-11-20 11:38:39.070206] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:53.721 [2024-11-20 11:38:39.071215] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:53.721 [2024-11-20 11:38:39.071229] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:53.721 [2024-11-20 11:38:39.127960] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:25:53.721 [2024-11-20 11:38:39.128303] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:25:53.721 [2024-11-20 11:38:39.129265] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
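Note: nvmfappstart launches the target inside that namespace with a two-core mask and interrupt mode enabled, which is what the notices above report (DPDK EAL init with two cores available, both reactors started, each spdk_thread switched to interrupt mode), and waitforlisten then blocks until the RPC socket answers. A hedged sketch of that launch-and-wait step, assuming the repo path shown in the trace; the polling loop and its timeout are illustrative, not the actual helper code:

    spdk=/home/vagrant/spdk_repo/spdk
    ip netns exec nvmf_tgt_ns_spdk "$spdk/build/bin/nvmf_tgt" \
        -i 0 -e 0xFFFF --interrupt-mode -m 0x3 &
    nvmfpid=$!
    # Poll the default RPC socket until the app is ready to accept RPCs.
    for _ in $(seq 1 100); do
        "$spdk/scripts/rpc.py" -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1 && break
        sleep 0.1
    done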
00:25:53.721 11:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:53.721 11:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:25:53.721 11:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:53.721 11:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:53.721 11:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:25:53.721 11:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:53.721 11:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:53.721 11:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:53.721 11:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:25:53.721 [2024-11-20 11:38:39.212038] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:53.721 11:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:53.721 11:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:25:53.721 11:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:53.721 11:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:25:53.721 11:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:53.721 11:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:25:53.721 11:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:53.721 11:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:25:53.721 [2024-11-20 11:38:39.232374] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:25:53.721 11:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:53.721 11:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:25:53.721 11:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:53.721 11:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:25:53.721 NULL1 00:25:53.721 11:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:53.721 11:38:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:25:53.721 11:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:53.721 11:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:25:53.721 Delay0 00:25:53.721 11:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:53.721 11:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:25:53.721 11:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:53.721 11:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:25:53.721 11:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:53.721 11:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=100536 00:25:53.721 11:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:25:53.722 11:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:25:53.979 [2024-11-20 11:38:39.424203] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
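Note: before starting any I/O, delete_subsystem.sh provisions the target through rpc_cmd (a thin wrapper around scripts/rpc.py): a TCP transport, subsystem nqn.2016-06.io.spdk:cnode1 with a listener on 10.0.0.3:4420, and a namespace backed by a delay bdev (Delay0) layered on a null bdev (NULL1). The 1000000 delay arguments add roughly one second to every I/O path (assuming the usual microsecond units), which guarantees commands are still in flight when the subsystem is deleted two seconds into the perf run. Roughly equivalent direct rpc.py calls, with arguments copied from the trace above:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    $rpc bdev_null_create NULL1 1000 512                       # backing null bdev
    $rpc bdev_delay_create -b NULL1 -d Delay0 \
        -r 1000000 -t 1000000 -w 1000000 -n 1000000            # ~1 s added latency per I/O
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0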
00:25:55.880 11:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:55.880 11:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:55.880 11:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:25:55.880 Read completed with error (sct=0, sc=8) 00:25:55.880 Write completed with error (sct=0, sc=8) 00:25:55.880 starting I/O failed: -6 00:25:55.880 Read completed with error (sct=0, sc=8) 00:25:55.880 Write completed with error (sct=0, sc=8) 00:25:55.880 Read completed with error (sct=0, sc=8) 00:25:55.880 Read completed with error (sct=0, sc=8) 00:25:55.880 starting I/O failed: -6 00:25:55.880 Read completed with error (sct=0, sc=8) 00:25:55.880 Read completed with error (sct=0, sc=8) 00:25:55.880 Write completed with error (sct=0, sc=8) 00:25:55.880 Read completed with error (sct=0, sc=8) 00:25:55.880 starting I/O failed: -6 00:25:55.880 Write completed with error (sct=0, sc=8) 00:25:55.880 Read completed with error (sct=0, sc=8) 00:25:55.880 Write completed with error (sct=0, sc=8) 00:25:55.880 Write completed with error (sct=0, sc=8) 00:25:55.880 starting I/O failed: -6 00:25:55.880 Read completed with error (sct=0, sc=8) 00:25:55.880 Read completed with error (sct=0, sc=8) 00:25:55.880 Write completed with error (sct=0, sc=8) 00:25:55.880 Read completed with error (sct=0, sc=8) 00:25:55.880 starting I/O failed: -6 00:25:55.880 Read completed with error (sct=0, sc=8) 00:25:55.880 Write completed with error (sct=0, sc=8) 00:25:55.880 Read completed with error (sct=0, sc=8) 00:25:55.880 Read completed with error (sct=0, sc=8) 00:25:55.880 starting I/O failed: -6 00:25:55.880 Read completed with error (sct=0, sc=8) 00:25:55.880 Read completed with error (sct=0, sc=8) 00:25:55.880 Read completed with error (sct=0, sc=8) 00:25:55.880 Write completed with error (sct=0, sc=8) 00:25:55.880 starting I/O failed: -6 00:25:55.880 Read completed with error (sct=0, sc=8) 00:25:55.880 Read completed with error (sct=0, sc=8) 00:25:55.880 Read completed with error (sct=0, sc=8) 00:25:55.880 Read completed with error (sct=0, sc=8) 00:25:55.880 starting I/O failed: -6 00:25:55.880 Read completed with error (sct=0, sc=8) 00:25:55.880 Write completed with error (sct=0, sc=8) 00:25:55.880 Read completed with error (sct=0, sc=8) 00:25:55.880 Write completed with error (sct=0, sc=8) 00:25:55.880 starting I/O failed: -6 00:25:55.880 Read completed with error (sct=0, sc=8) 00:25:55.880 Write completed with error (sct=0, sc=8) 00:25:55.880 Write completed with error (sct=0, sc=8) 00:25:55.880 Read completed with error (sct=0, sc=8) 00:25:55.880 starting I/O failed: -6 00:25:55.880 Read completed with error (sct=0, sc=8) 00:25:55.880 Write completed with error (sct=0, sc=8) 00:25:55.880 Write completed with error (sct=0, sc=8) 00:25:55.880 Write completed with error (sct=0, sc=8) 00:25:55.880 starting I/O failed: -6 00:25:55.880 Read completed with error (sct=0, sc=8) 00:25:55.880 Read completed with error (sct=0, sc=8) 00:25:55.880 Read completed with error (sct=0, sc=8) 00:25:55.880 Read completed with error (sct=0, sc=8) 00:25:55.880 starting I/O failed: -6 00:25:55.880 Write completed with error (sct=0, sc=8) 00:25:55.880 Read completed with error (sct=0, sc=8) 00:25:55.880 Read completed with error (sct=0, sc=8) 00:25:55.880 Write completed with error (sct=0, 
sc=8) 00:25:55.880 starting I/O failed: -6 00:25:55.880 Write completed with error (sct=0, sc=8) 00:25:55.880 Read completed with error (sct=0, sc=8) 00:25:55.880 Read completed with error (sct=0, sc=8) 00:25:55.880 Write completed with error (sct=0, sc=8) 00:25:55.880 starting I/O failed: -6 00:25:55.880 Write completed with error (sct=0, sc=8) 00:25:55.880 Read completed with error (sct=0, sc=8) 00:25:55.880 Write completed with error (sct=0, sc=8) 00:25:55.880 Write completed with error (sct=0, sc=8) 00:25:55.880 starting I/O failed: -6 00:25:55.880 Read completed with error (sct=0, sc=8) 00:25:55.880 Read completed with error (sct=0, sc=8) 00:25:55.880 Read completed with error (sct=0, sc=8) 00:25:55.880 Read completed with error (sct=0, sc=8) 00:25:55.880 starting I/O failed: -6 00:25:55.880 Write completed with error (sct=0, sc=8) 00:25:55.880 Read completed with error (sct=0, sc=8) 00:25:55.880 Read completed with error (sct=0, sc=8) 00:25:55.880 starting I/O failed: -6 00:25:55.880 Read completed with error (sct=0, sc=8) 00:25:55.880 Read completed with error (sct=0, sc=8) 00:25:55.880 Read completed with error (sct=0, sc=8) 00:25:55.880 starting I/O failed: -6 00:25:55.880 Write completed with error (sct=0, sc=8) 00:25:55.880 Read completed with error (sct=0, sc=8) 00:25:55.880 Read completed with error (sct=0, sc=8) 00:25:55.880 Write completed with error (sct=0, sc=8) 00:25:55.880 starting I/O failed: -6 00:25:55.880 Read completed with error (sct=0, sc=8) 00:25:55.880 Read completed with error (sct=0, sc=8) 00:25:55.880 Read completed with error (sct=0, sc=8) 00:25:55.880 Read completed with error (sct=0, sc=8) 00:25:55.880 starting I/O failed: -6 00:25:55.880 Read completed with error (sct=0, sc=8) 00:25:55.880 Read completed with error (sct=0, sc=8) 00:25:55.880 Read completed with error (sct=0, sc=8) 00:25:55.880 Read completed with error (sct=0, sc=8) 00:25:55.880 starting I/O failed: -6 00:25:55.880 Read completed with error (sct=0, sc=8) 00:25:55.880 Read completed with error (sct=0, sc=8) 00:25:55.880 Read completed with error (sct=0, sc=8) 00:25:55.880 Read completed with error (sct=0, sc=8) 00:25:55.880 starting I/O failed: -6 00:25:55.880 Write completed with error (sct=0, sc=8) 00:25:55.880 Read completed with error (sct=0, sc=8) 00:25:55.880 Write completed with error (sct=0, sc=8) 00:25:55.880 Read completed with error (sct=0, sc=8) 00:25:55.880 starting I/O failed: -6 00:25:55.880 [2024-11-20 11:38:41.461460] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b9dc30 is same with the state(6) to be set 00:25:55.880 Read completed with error (sct=0, sc=8) 00:25:55.880 starting I/O failed: -6 00:25:55.880 Read completed with error (sct=0, sc=8) 00:25:55.880 Write completed with error (sct=0, sc=8) 00:25:55.880 starting I/O failed: -6 00:25:55.880 Read completed with error (sct=0, sc=8) 00:25:55.880 Read completed with error (sct=0, sc=8) 00:25:55.880 starting I/O failed: -6 00:25:55.880 Write completed with error (sct=0, sc=8) 00:25:55.880 Write completed with error (sct=0, sc=8) 00:25:55.880 starting I/O failed: -6 00:25:55.880 Write completed with error (sct=0, sc=8) 00:25:55.880 Read completed with error (sct=0, sc=8) 00:25:55.880 starting I/O failed: -6 00:25:55.880 Read completed with error (sct=0, sc=8) 00:25:55.880 Read completed with error (sct=0, sc=8) 00:25:55.880 starting I/O failed: -6 00:25:55.880 Write completed with error (sct=0, sc=8) 00:25:55.880 Write completed with error (sct=0, sc=8) 00:25:55.880 starting I/O 
failed: -6 00:25:55.880 Read completed with error (sct=0, sc=8) 00:25:55.880 Read completed with error (sct=0, sc=8) 00:25:55.880 starting I/O failed: -6 00:25:55.880 Write completed with error (sct=0, sc=8) 00:25:55.880 Read completed with error (sct=0, sc=8) 00:25:55.880 starting I/O failed: -6 00:25:55.880 Read completed with error (sct=0, sc=8) 00:25:55.880 Read completed with error (sct=0, sc=8) 00:25:55.880 starting I/O failed: -6 00:25:55.880 Read completed with error (sct=0, sc=8) 00:25:55.880 Read completed with error (sct=0, sc=8) 00:25:55.880 starting I/O failed: -6 00:25:55.880 Read completed with error (sct=0, sc=8) 00:25:55.880 Read completed with error (sct=0, sc=8) 00:25:55.880 starting I/O failed: -6 00:25:55.880 Write completed with error (sct=0, sc=8) 00:25:55.880 Read completed with error (sct=0, sc=8) 00:25:55.880 starting I/O failed: -6 00:25:55.880 Read completed with error (sct=0, sc=8) 00:25:55.880 Read completed with error (sct=0, sc=8) 00:25:55.880 starting I/O failed: -6 00:25:55.880 Read completed with error (sct=0, sc=8) 00:25:55.880 Read completed with error (sct=0, sc=8) 00:25:55.880 starting I/O failed: -6 00:25:55.880 Read completed with error (sct=0, sc=8) 00:25:55.880 Read completed with error (sct=0, sc=8) 00:25:55.880 starting I/O failed: -6 00:25:55.880 Read completed with error (sct=0, sc=8) 00:25:55.880 Read completed with error (sct=0, sc=8) 00:25:55.880 starting I/O failed: -6 00:25:55.880 Read completed with error (sct=0, sc=8) 00:25:55.880 Read completed with error (sct=0, sc=8) 00:25:55.880 starting I/O failed: -6 00:25:55.880 Read completed with error (sct=0, sc=8) 00:25:55.880 Write completed with error (sct=0, sc=8) 00:25:55.880 starting I/O failed: -6 00:25:55.880 Read completed with error (sct=0, sc=8) 00:25:55.880 Read completed with error (sct=0, sc=8) 00:25:55.880 starting I/O failed: -6 00:25:55.880 Read completed with error (sct=0, sc=8) 00:25:55.880 Read completed with error (sct=0, sc=8) 00:25:55.880 starting I/O failed: -6 00:25:55.880 Write completed with error (sct=0, sc=8) 00:25:55.880 Write completed with error (sct=0, sc=8) 00:25:55.880 starting I/O failed: -6 00:25:55.880 Read completed with error (sct=0, sc=8) 00:25:55.880 Read completed with error (sct=0, sc=8) 00:25:55.880 starting I/O failed: -6 00:25:55.880 Read completed with error (sct=0, sc=8) 00:25:55.880 Read completed with error (sct=0, sc=8) 00:25:55.881 Read completed with error (sct=0, sc=8) 00:25:55.881 Write completed with error (sct=0, sc=8) 00:25:55.881 starting I/O failed: -6 00:25:55.881 Read completed with error (sct=0, sc=8) 00:25:55.881 Read completed with error (sct=0, sc=8) 00:25:55.881 Read completed with error (sct=0, sc=8) 00:25:55.881 starting I/O failed: -6 00:25:55.881 Read completed with error (sct=0, sc=8) 00:25:55.881 Write completed with error (sct=0, sc=8) 00:25:55.881 starting I/O failed: -6 00:25:55.881 Write completed with error (sct=0, sc=8) 00:25:55.881 Read completed with error (sct=0, sc=8) 00:25:55.881 starting I/O failed: -6 00:25:55.881 [2024-11-20 11:38:41.462182] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f006c000c40 is same with the state(6) to be set 00:25:55.881 Read completed with error (sct=0, sc=8) 00:25:55.881 Read completed with error (sct=0, sc=8) 00:25:55.881 Read completed with error (sct=0, sc=8) 00:25:55.881 starting I/O failed: -6 00:25:55.881 starting I/O failed: -6 00:25:55.881 Write completed with error (sct=0, sc=8) 00:25:55.881 starting I/O failed: -6 00:25:55.881 Read 
completed with error (sct=0, sc=8) 00:25:55.881 Read completed with error (sct=0, sc=8) 00:25:55.881 Read completed with error (sct=0, sc=8) 00:25:55.881 Write completed with error (sct=0, sc=8) 00:25:55.881 Read completed with error (sct=0, sc=8) 00:25:55.881 Read completed with error (sct=0, sc=8) 00:25:55.881 Read completed with error (sct=0, sc=8) 00:25:55.881 Read completed with error (sct=0, sc=8) 00:25:55.881 Write completed with error (sct=0, sc=8) 00:25:55.881 Write completed with error (sct=0, sc=8) 00:25:55.881 Write completed with error (sct=0, sc=8) 00:25:55.881 Read completed with error (sct=0, sc=8) 00:25:55.881 Read completed with error (sct=0, sc=8) 00:25:55.881 Read completed with error (sct=0, sc=8) 00:25:55.881 Read completed with error (sct=0, sc=8) 00:25:55.881 Read completed with error (sct=0, sc=8) 00:25:55.881 Read completed with error (sct=0, sc=8) 00:25:55.881 Write completed with error (sct=0, sc=8) 00:25:55.881 Write completed with error (sct=0, sc=8) 00:25:55.881 Read completed with error (sct=0, sc=8) 00:25:55.881 Read completed with error (sct=0, sc=8) 00:25:55.881 Read completed with error (sct=0, sc=8) 00:25:55.881 Read completed with error (sct=0, sc=8) 00:25:55.881 Read completed with error (sct=0, sc=8) 00:25:55.881 Write completed with error (sct=0, sc=8) 00:25:55.881 Read completed with error (sct=0, sc=8) 00:25:55.881 Read completed with error (sct=0, sc=8) 00:25:55.881 Read completed with error (sct=0, sc=8) 00:25:55.881 Read completed with error (sct=0, sc=8) 00:25:55.881 Write completed with error (sct=0, sc=8) 00:25:55.881 Read completed with error (sct=0, sc=8) 00:25:55.881 Read completed with error (sct=0, sc=8) 00:25:55.881 Read completed with error (sct=0, sc=8) 00:25:55.881 Write completed with error (sct=0, sc=8) 00:25:55.881 Read completed with error (sct=0, sc=8) 00:25:55.881 Write completed with error (sct=0, sc=8) 00:25:55.881 Write completed with error (sct=0, sc=8) 00:25:55.881 Read completed with error (sct=0, sc=8) 00:25:55.881 Read completed with error (sct=0, sc=8) 00:25:55.881 Write completed with error (sct=0, sc=8) 00:25:55.881 Read completed with error (sct=0, sc=8) 00:25:55.881 Read completed with error (sct=0, sc=8) 00:25:55.881 Write completed with error (sct=0, sc=8) 00:25:55.881 Write completed with error (sct=0, sc=8) 00:25:55.881 Read completed with error (sct=0, sc=8) 00:25:55.881 Read completed with error (sct=0, sc=8) 00:25:55.881 Read completed with error (sct=0, sc=8) 00:25:55.881 Read completed with error (sct=0, sc=8) 00:25:55.881 Write completed with error (sct=0, sc=8) 00:25:55.881 Read completed with error (sct=0, sc=8) 00:25:55.881 Read completed with error (sct=0, sc=8) 00:25:57.258 [2024-11-20 11:38:42.442356] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b99ee0 is same with the state(6) to be set 00:25:57.258 Read completed with error (sct=0, sc=8) 00:25:57.258 Read completed with error (sct=0, sc=8) 00:25:57.258 Write completed with error (sct=0, sc=8) 00:25:57.258 Write completed with error (sct=0, sc=8) 00:25:57.258 Read completed with error (sct=0, sc=8) 00:25:57.258 Read completed with error (sct=0, sc=8) 00:25:57.258 Write completed with error (sct=0, sc=8) 00:25:57.258 Write completed with error (sct=0, sc=8) 00:25:57.258 Read completed with error (sct=0, sc=8) 00:25:57.258 Read completed with error (sct=0, sc=8) 00:25:57.258 Read completed with error (sct=0, sc=8) 00:25:57.258 Read completed with error (sct=0, sc=8) 00:25:57.258 Write completed with error 
(sct=0, sc=8) 00:25:57.258 Read completed with error (sct=0, sc=8) 00:25:57.258 Write completed with error (sct=0, sc=8) 00:25:57.258 Write completed with error (sct=0, sc=8) 00:25:57.258 Read completed with error (sct=0, sc=8) 00:25:57.258 Read completed with error (sct=0, sc=8) 00:25:57.258 Read completed with error (sct=0, sc=8) 00:25:57.258 Read completed with error (sct=0, sc=8) 00:25:57.258 Read completed with error (sct=0, sc=8) 00:25:57.258 Read completed with error (sct=0, sc=8) 00:25:57.258 Write completed with error (sct=0, sc=8) 00:25:57.258 Read completed with error (sct=0, sc=8) 00:25:57.258 Read completed with error (sct=0, sc=8) 00:25:57.258 Read completed with error (sct=0, sc=8) 00:25:57.258 [2024-11-20 11:38:42.458462] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b9da50 is same with the state(6) to be set 00:25:57.258 Read completed with error (sct=0, sc=8) 00:25:57.258 Read completed with error (sct=0, sc=8) 00:25:57.258 Write completed with error (sct=0, sc=8) 00:25:57.258 Read completed with error (sct=0, sc=8) 00:25:57.258 Read completed with error (sct=0, sc=8) 00:25:57.258 Read completed with error (sct=0, sc=8) 00:25:57.258 Read completed with error (sct=0, sc=8) 00:25:57.258 Read completed with error (sct=0, sc=8) 00:25:57.258 Write completed with error (sct=0, sc=8) 00:25:57.258 Write completed with error (sct=0, sc=8) 00:25:57.258 Read completed with error (sct=0, sc=8) 00:25:57.258 Read completed with error (sct=0, sc=8) 00:25:57.258 Read completed with error (sct=0, sc=8) 00:25:57.258 Read completed with error (sct=0, sc=8) 00:25:57.258 Write completed with error (sct=0, sc=8) 00:25:57.258 Read completed with error (sct=0, sc=8) 00:25:57.258 Read completed with error (sct=0, sc=8) 00:25:57.258 Write completed with error (sct=0, sc=8) 00:25:57.258 Read completed with error (sct=0, sc=8) 00:25:57.258 Read completed with error (sct=0, sc=8) 00:25:57.258 Read completed with error (sct=0, sc=8) 00:25:57.258 Read completed with error (sct=0, sc=8) 00:25:57.258 Read completed with error (sct=0, sc=8) 00:25:57.258 Read completed with error (sct=0, sc=8) 00:25:57.258 Read completed with error (sct=0, sc=8) 00:25:57.258 [2024-11-20 11:38:42.460457] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba0ea0 is same with the state(6) to be set 00:25:57.258 Read completed with error (sct=0, sc=8) 00:25:57.258 Write completed with error (sct=0, sc=8) 00:25:57.258 Write completed with error (sct=0, sc=8) 00:25:57.258 Read completed with error (sct=0, sc=8) 00:25:57.258 Write completed with error (sct=0, sc=8) 00:25:57.258 Read completed with error (sct=0, sc=8) 00:25:57.258 Read completed with error (sct=0, sc=8) 00:25:57.258 Read completed with error (sct=0, sc=8) 00:25:57.258 Read completed with error (sct=0, sc=8) 00:25:57.258 Write completed with error (sct=0, sc=8) 00:25:57.258 Write completed with error (sct=0, sc=8) 00:25:57.258 Write completed with error (sct=0, sc=8) 00:25:57.258 Read completed with error (sct=0, sc=8) 00:25:57.258 Write completed with error (sct=0, sc=8) 00:25:57.258 Write completed with error (sct=0, sc=8) 00:25:57.258 Write completed with error (sct=0, sc=8) 00:25:57.258 Write completed with error (sct=0, sc=8) 00:25:57.258 Read completed with error (sct=0, sc=8) 00:25:57.258 Read completed with error (sct=0, sc=8) 00:25:57.258 Write completed with error (sct=0, sc=8) 00:25:57.258 Read completed with error (sct=0, sc=8) 00:25:57.258 Read completed with error (sct=0, sc=8) 
00:25:57.258 Write completed with error (sct=0, sc=8) 00:25:57.258 Write completed with error (sct=0, sc=8) 00:25:57.258 Write completed with error (sct=0, sc=8) 00:25:57.258 Read completed with error (sct=0, sc=8) 00:25:57.258 Read completed with error (sct=0, sc=8) 00:25:57.259 Read completed with error (sct=0, sc=8) 00:25:57.259 Write completed with error (sct=0, sc=8) 00:25:57.259 Read completed with error (sct=0, sc=8) 00:25:57.259 Read completed with error (sct=0, sc=8) 00:25:57.259 Read completed with error (sct=0, sc=8) 00:25:57.259 Write completed with error (sct=0, sc=8) 00:25:57.259 Read completed with error (sct=0, sc=8) 00:25:57.259 Read completed with error (sct=0, sc=8) 00:25:57.259 Read completed with error (sct=0, sc=8) 00:25:57.259 [2024-11-20 11:38:42.461325] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f006c00d020 is same with the state(6) to be set 00:25:57.259 Write completed with error (sct=0, sc=8) 00:25:57.259 Read completed with error (sct=0, sc=8) 00:25:57.259 Write completed with error (sct=0, sc=8) 00:25:57.259 Write completed with error (sct=0, sc=8) 00:25:57.259 Write completed with error (sct=0, sc=8) 00:25:57.259 Write completed with error (sct=0, sc=8) 00:25:57.259 Read completed with error (sct=0, sc=8) 00:25:57.259 Read completed with error (sct=0, sc=8) 00:25:57.259 Read completed with error (sct=0, sc=8) 00:25:57.259 Read completed with error (sct=0, sc=8) 00:25:57.259 Read completed with error (sct=0, sc=8) 00:25:57.259 Read completed with error (sct=0, sc=8) 00:25:57.259 Read completed with error (sct=0, sc=8) 00:25:57.259 Read completed with error (sct=0, sc=8) 00:25:57.259 Read completed with error (sct=0, sc=8) 00:25:57.259 Read completed with error (sct=0, sc=8) 00:25:57.259 Read completed with error (sct=0, sc=8) 00:25:57.259 Read completed with error (sct=0, sc=8) 00:25:57.259 Read completed with error (sct=0, sc=8) 00:25:57.259 Write completed with error (sct=0, sc=8) 00:25:57.259 Write completed with error (sct=0, sc=8) 00:25:57.259 Read completed with error (sct=0, sc=8) 00:25:57.259 Write completed with error (sct=0, sc=8) 00:25:57.259 Read completed with error (sct=0, sc=8) 00:25:57.259 Write completed with error (sct=0, sc=8) 00:25:57.259 Read completed with error (sct=0, sc=8) 00:25:57.259 Read completed with error (sct=0, sc=8) 00:25:57.259 Write completed with error (sct=0, sc=8) 00:25:57.259 Read completed with error (sct=0, sc=8) 00:25:57.259 Write completed with error (sct=0, sc=8) 00:25:57.259 Read completed with error (sct=0, sc=8) 00:25:57.259 Read completed with error (sct=0, sc=8) 00:25:57.259 Read completed with error (sct=0, sc=8) 00:25:57.259 Read completed with error (sct=0, sc=8) 00:25:57.259 Read completed with error (sct=0, sc=8) 00:25:57.259 Read completed with error (sct=0, sc=8) 00:25:57.259 [2024-11-20 11:38:42.461875] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f006c00d680 is same with the state(6) to be set 00:25:57.259 Initializing NVMe Controllers 00:25:57.259 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:25:57.259 Controller IO queue size 128, less than required. 00:25:57.259 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:25:57.259 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:25:57.259 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:25:57.259 Initialization complete. Launching workers. 00:25:57.259 ======================================================== 00:25:57.259 Latency(us) 00:25:57.259 Device Information : IOPS MiB/s Average min max 00:25:57.259 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 172.07 0.08 891225.93 2142.59 1014986.73 00:25:57.259 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 178.02 0.09 919932.70 489.77 1015136.77 00:25:57.259 ======================================================== 00:25:57.259 Total : 350.09 0.17 905823.28 489.77 1015136.77 00:25:57.259 00:25:57.259 [2024-11-20 11:38:42.462477] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b99ee0 (9): Bad file descriptor 00:25:57.259 /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf: errors occurred 00:25:57.259 11:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:57.259 11:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:25:57.259 11:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 100536 00:25:57.259 11:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:25:57.519 11:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:25:57.519 11:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 100536 00:25:57.519 /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (100536) - No such process 00:25:57.519 11:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 100536 00:25:57.519 11:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0 00:25:57.519 11:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 100536 00:25:57.519 11:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait 00:25:57.519 11:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:57.519 11:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:25:57.519 11:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:57.519 11:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 100536 00:25:57.519 11:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:25:57.519 11:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:57.519 11:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:57.519 11:38:42 
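Note on the output above: nvmf_delete_subsystem is issued about two seconds into the five-second perf run, the target tears down the subsystem's queue pairs, and every outstanding command completes with (sct=0, sc=8), i.e. generic status 0x08, command aborted due to submission queue deletion, which is what pulling the subsystem out from under active I/O should produce. The roughly 0.9 s average latencies in the summary reflect the delay bdev, and spdk_nvme_perf exits reporting "errors occurred". The script then polls until the perf process is gone and asserts, via the NOT helper, that waiting on it yields a nonzero status. A hedged reconstruction of this first pass (command lines are taken from the trace; the loop body is a sketch of the traced script lines, not their exact text):

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' \
        -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
    perf_pid=$!
    sleep 2
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    delay=0
    while kill -0 "$perf_pid" 2>/dev/null; do
        (( delay++ > 30 )) && exit 1      # perf must notice the deletion and exit promptly
        sleep 0.5
    done
    ! wait "$perf_pid"                    # expected to fail: the run ended with I/O errors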
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:57.519 11:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:25:57.519 11:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:57.519 11:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:25:57.519 11:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:57.519 11:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:25:57.519 11:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:57.519 11:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:25:57.519 [2024-11-20 11:38:42.985309] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:25:57.519 11:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:57.519 11:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:25:57.519 11:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:57.519 11:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:25:57.519 11:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:57.519 11:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=100582 00:25:57.519 11:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:25:57.519 11:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 100582 00:25:57.519 11:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:25:57.519 11:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:25:57.779 [2024-11-20 11:38:43.161524] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
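Note: the second pass re-creates the same subsystem and listener, re-attaches Delay0, and starts a shorter perf job (-t 3, pid 100582 in this run). Nothing is deleted underneath it in the trace this time; the script simply polls every 0.5 s, with a bound of about 20 iterations, until perf finishes on its own, and the plain wait afterwards returns cleanly. The roughly 1.0 s averages in the summary further down again match the delay bdev configuration. A minimal sketch of that bounded wait, with the same caveat that the loop is illustrative rather than the script's exact text:

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' \
        -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 &
    perf_pid=$!
    delay=0
    while kill -0 "$perf_pid" 2>/dev/null; do
        (( delay++ > 20 )) && exit 1   # should not trigger: a 3 s run drains well inside ~10 s
        sleep 0.5
    done
    wait "$perf_pid"                   # returns cleanly here (per the trace)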
00:25:58.039 11:38:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:25:58.039 11:38:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 100582 00:25:58.039 11:38:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:25:58.607 11:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:25:58.607 11:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 100582 00:25:58.607 11:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:25:59.174 11:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:25:59.174 11:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 100582 00:25:59.174 11:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:25:59.432 11:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:25:59.432 11:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 100582 00:25:59.432 11:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:25:59.999 11:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:25:59.999 11:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 100582 00:25:59.999 11:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:26:00.566 11:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:26:00.566 11:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 100582 00:26:00.566 11:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:26:00.825 Initializing NVMe Controllers 00:26:00.825 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:26:00.825 Controller IO queue size 128, less than required. 00:26:00.825 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:00.825 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:26:00.825 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:26:00.825 Initialization complete. Launching workers. 
00:26:00.825 ======================================================== 00:26:00.825 Latency(us) 00:26:00.825 Device Information : IOPS MiB/s Average min max 00:26:00.825 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1003291.17 1000177.57 1010915.17 00:26:00.825 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1005547.45 1000435.31 1013173.52 00:26:00.825 ======================================================== 00:26:00.825 Total : 256.00 0.12 1004419.31 1000177.57 1013173.52 00:26:00.825 00:26:01.084 11:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:26:01.084 11:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 100582 00:26:01.084 /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (100582) - No such process 00:26:01.084 11:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 100582 00:26:01.084 11:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:26:01.084 11:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:26:01.084 11:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:01.084 11:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:26:01.084 11:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:01.084 11:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:26:01.084 11:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:01.084 11:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:01.084 rmmod nvme_tcp 00:26:01.084 rmmod nvme_fabrics 00:26:01.084 rmmod nvme_keyring 00:26:01.084 11:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:01.084 11:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:26:01.084 11:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:26:01.084 11:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 100497 ']' 00:26:01.084 11:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 100497 00:26:01.084 11:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 100497 ']' 00:26:01.084 11:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 100497 00:26:01.084 11:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:26:01.084 11:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:01.084 11:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- 
# ps --no-headers -o comm= 100497 00:26:01.084 11:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:01.084 killing process with pid 100497 00:26:01.084 11:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:01.084 11:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 100497' 00:26:01.084 11:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 100497 00:26:01.084 11:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 100497 00:26:01.342 11:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:01.343 11:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:01.343 11:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:01.343 11:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:26:01.343 11:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:26:01.343 11:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:01.343 11:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:26:01.343 11:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:01.343 11:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:26:01.343 11:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:26:01.343 11:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:26:01.343 11:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:26:01.343 11:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:26:01.343 11:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:26:01.343 11:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:26:01.343 11:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:26:01.343 11:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:26:01.343 11:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:26:01.601 11:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:26:01.601 11:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@243 -- # ip link 
delete nvmf_init_if2 00:26:01.601 11:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:26:01.601 11:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:26:01.601 11:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@246 -- # remove_spdk_ns 00:26:01.601 11:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:01.601 11:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:01.602 11:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:01.602 11:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@300 -- # return 0 00:26:01.602 00:26:01.602 real 0m8.878s 00:26:01.602 user 0m23.933s 00:26:01.602 sys 0m2.449s 00:26:01.602 11:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:01.602 11:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:26:01.602 ************************************ 00:26:01.602 END TEST nvmf_delete_subsystem 00:26:01.602 ************************************ 00:26:01.602 11:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:26:01.602 11:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:26:01.602 11:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:01.602 11:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:26:01.602 ************************************ 00:26:01.602 START TEST nvmf_host_management 00:26:01.602 ************************************ 00:26:01.602 11:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:26:01.602 * Looking for test storage... 
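[editor's note] The killprocess trace above checks that the pid still exists and that it really is an SPDK reactor process before killing and reaping it. Below is a minimal sketch of that pattern, assuming the pid is a child of the calling shell; it is simplified and omits the FreeBSD branch and the sudo-wrapper handling the real helper in autotest_common.sh performs.

    killprocess_sketch() {
        local pid=$1
        kill -0 "$pid" || return 1                        # already gone, nothing to do
        local process_name
        process_name=$(ps --no-headers -o comm= "$pid")   # e.g. reactor_0 for an SPDK target
        echo "killing process with pid $pid"
        if [ "$process_name" = sudo ]; then
            return 0    # the real helper special-cases a sudo wrapper; skipped in this sketch
        fi
        kill "$pid"
        wait "$pid"     # works because the target was started by this shell
    }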
00:26:01.861 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:26:01.861 11:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:26:01.861 11:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1693 -- # lcov --version 00:26:01.861 11:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:26:01.861 11:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:26:01.861 11:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:01.862 11:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:01.862 11:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:01.862 11:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:26:01.862 11:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:26:01.862 11:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:26:01.862 11:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:26:01.862 11:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:26:01.862 11:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:26:01.862 11:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:26:01.862 11:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:01.862 11:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:26:01.862 11:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:26:01.862 11:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:01.862 11:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:01.862 11:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:26:01.862 11:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:26:01.862 11:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:01.862 11:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:26:01.862 11:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:26:01.862 11:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:26:01.862 11:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:26:01.862 11:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:01.862 11:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:26:01.862 11:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:26:01.862 11:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:01.862 11:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:01.862 11:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:26:01.862 11:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:01.862 11:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:26:01.862 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:01.862 --rc genhtml_branch_coverage=1 00:26:01.862 --rc genhtml_function_coverage=1 00:26:01.862 --rc genhtml_legend=1 00:26:01.862 --rc geninfo_all_blocks=1 00:26:01.862 --rc geninfo_unexecuted_blocks=1 00:26:01.862 00:26:01.862 ' 00:26:01.862 11:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:26:01.862 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:01.862 --rc genhtml_branch_coverage=1 00:26:01.862 --rc genhtml_function_coverage=1 00:26:01.862 --rc genhtml_legend=1 00:26:01.862 --rc geninfo_all_blocks=1 00:26:01.862 --rc geninfo_unexecuted_blocks=1 00:26:01.862 00:26:01.862 ' 00:26:01.862 11:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:26:01.862 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:01.862 --rc genhtml_branch_coverage=1 00:26:01.862 --rc genhtml_function_coverage=1 00:26:01.862 --rc genhtml_legend=1 00:26:01.862 --rc geninfo_all_blocks=1 00:26:01.862 --rc geninfo_unexecuted_blocks=1 00:26:01.862 00:26:01.862 ' 00:26:01.862 11:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:26:01.862 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:01.862 --rc genhtml_branch_coverage=1 00:26:01.862 --rc genhtml_function_coverage=1 00:26:01.862 --rc genhtml_legend=1 
00:26:01.862 --rc geninfo_all_blocks=1 00:26:01.862 --rc geninfo_unexecuted_blocks=1 00:26:01.862 00:26:01.862 ' 00:26:01.862 11:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:26:01.862 11:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:26:01.862 11:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:01.862 11:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:01.862 11:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:01.862 11:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:01.862 11:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:01.862 11:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:01.862 11:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:01.862 11:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:01.862 11:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:01.862 11:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:01.862 11:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a 00:26:01.862 11:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=806d442c-08ba-4719-b76d-33169283459a 00:26:01.862 11:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:01.862 11:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:01.862 11:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:26:01.862 11:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:01.862 11:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:01.862 11:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:26:01.862 11:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:01.862 11:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:01.862 11:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:01.862 11:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:01.862 11:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:01.863 11:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:01.863 11:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:26:01.863 11:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:01.863 11:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:26:01.863 11:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:01.863 11:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:01.863 11:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:01.863 11:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:01.863 11:38:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:01.863 11:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:26:01.863 11:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:26:01.863 11:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:01.863 11:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:01.863 11:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:01.863 11:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:01.863 11:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:01.863 11:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:26:01.863 11:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:01.863 11:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:01.863 11:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:01.863 11:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:01.863 11:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:01.863 11:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:01.863 11:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:01.863 11:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:01.863 11:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:26:01.863 11:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:26:01.863 11:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:26:01.863 11:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:26:01.863 11:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:26:01.863 11:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@460 -- # nvmf_veth_init 00:26:01.863 11:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:01.863 11:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:26:01.863 11:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:26:01.863 11:38:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:26:01.863 11:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:01.863 11:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:26:01.863 11:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:26:01.863 11:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:26:01.863 11:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:26:01.863 11:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:26:01.863 11:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:26:01.863 11:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:01.863 11:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:26:01.863 11:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:26:01.863 11:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:26:01.863 11:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:26:01.863 11:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:26:01.863 Cannot find device "nvmf_init_br" 00:26:01.863 11:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@162 -- # true 00:26:01.863 11:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:26:01.863 Cannot find device "nvmf_init_br2" 00:26:01.863 11:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@163 -- # true 00:26:01.863 11:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:26:01.863 Cannot find device "nvmf_tgt_br" 00:26:01.863 11:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@164 -- # true 00:26:01.863 11:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:26:01.863 Cannot find device "nvmf_tgt_br2" 00:26:01.863 11:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@165 -- # true 00:26:01.863 11:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:26:01.863 Cannot find device "nvmf_init_br" 00:26:01.863 11:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@166 -- # true 00:26:01.863 11:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 
down 00:26:01.863 Cannot find device "nvmf_init_br2" 00:26:01.863 11:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@167 -- # true 00:26:01.863 11:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:26:01.863 Cannot find device "nvmf_tgt_br" 00:26:01.863 11:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@168 -- # true 00:26:01.863 11:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:26:02.122 Cannot find device "nvmf_tgt_br2" 00:26:02.122 11:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@169 -- # true 00:26:02.122 11:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:26:02.122 Cannot find device "nvmf_br" 00:26:02.122 11:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@170 -- # true 00:26:02.122 11:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:26:02.122 Cannot find device "nvmf_init_if" 00:26:02.122 11:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@171 -- # true 00:26:02.123 11:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:26:02.123 Cannot find device "nvmf_init_if2" 00:26:02.123 11:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@172 -- # true 00:26:02.123 11:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:26:02.123 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:02.123 11:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@173 -- # true 00:26:02.123 11:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:26:02.123 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:02.123 11:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@174 -- # true 00:26:02.123 11:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:26:02.123 11:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:26:02.123 11:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:26:02.123 11:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:26:02.123 11:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:26:02.123 11:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:26:02.123 11:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns 
nvmf_tgt_ns_spdk 00:26:02.123 11:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:26:02.123 11:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:26:02.123 11:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:26:02.123 11:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:26:02.123 11:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:26:02.123 11:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:26:02.123 11:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:26:02.123 11:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:26:02.123 11:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:26:02.123 11:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:26:02.123 11:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:26:02.123 11:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:26:02.123 11:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:26:02.123 11:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:26:02.123 11:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:26:02.123 11:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:26:02.123 11:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:26:02.123 11:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:26:02.123 11:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:26:02.123 11:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:26:02.123 11:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:26:02.123 11:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:26:02.123 11:38:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:26:02.123 11:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:26:02.123 11:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:26:02.123 11:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:26:02.123 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:26:02.123 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.112 ms 00:26:02.123 00:26:02.123 --- 10.0.0.3 ping statistics --- 00:26:02.123 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:02.123 rtt min/avg/max/mdev = 0.112/0.112/0.112/0.000 ms 00:26:02.123 11:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:26:02.123 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:26:02.123 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.043 ms 00:26:02.123 00:26:02.123 --- 10.0.0.4 ping statistics --- 00:26:02.123 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:02.123 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:26:02.123 11:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:26:02.123 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:02.123 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:26:02.123 00:26:02.123 --- 10.0.0.1 ping statistics --- 00:26:02.123 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:02.123 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:26:02.123 11:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:26:02.123 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:26:02.123 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.056 ms 00:26:02.123 00:26:02.123 --- 10.0.0.2 ping statistics --- 00:26:02.123 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:02.123 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:26:02.123 11:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:02.123 11:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@461 -- # return 0 00:26:02.123 11:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:02.123 11:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:02.123 11:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:02.123 11:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:02.123 11:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:02.123 11:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:02.123 11:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:02.382 11:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:26:02.382 11:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:26:02.382 11:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:26:02.382 11:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:02.382 11:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:02.382 11:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:26:02.382 11:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=100864 00:26:02.382 11:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 100864 00:26:02.382 11:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E 00:26:02.382 11:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 100864 ']' 00:26:02.382 11:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:02.382 11:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:02.382 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:02.382 11:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
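[editor's note] The nvmf/common.sh lines traced above build the virtual test network: initiator addresses 10.0.0.1/10.0.0.2 stay in the root namespace, the target gets 10.0.0.3/10.0.0.4 inside nvmf_tgt_ns_spdk, and both sides are joined through the nvmf_br bridge, with iptables ACCEPT rules for port 4420 and ping sanity checks. A condensed sketch of one interface pair follows (the second init/tgt pair is set up the same way and is elided here); error handling and the iptables comment tagging are omitted.

    # One veth pair for the initiator and one for the target namespace, bridged together.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.3    # initiator -> target reachability check, as in the trace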
00:26:02.382 11:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:02.382 11:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:26:02.382 [2024-11-20 11:38:47.794962] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:26:02.382 [2024-11-20 11:38:47.796435] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 00:26:02.382 [2024-11-20 11:38:47.796505] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:02.382 [2024-11-20 11:38:47.955578] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:02.642 [2024-11-20 11:38:47.997753] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:02.642 [2024-11-20 11:38:47.997811] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:02.642 [2024-11-20 11:38:47.997825] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:02.642 [2024-11-20 11:38:47.997836] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:02.642 [2024-11-20 11:38:47.997844] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:02.642 [2024-11-20 11:38:47.998742] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:02.642 [2024-11-20 11:38:47.998786] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:26:02.642 [2024-11-20 11:38:47.998873] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:26:02.642 [2024-11-20 11:38:47.998879] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:02.642 [2024-11-20 11:38:48.054380] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:26:02.642 [2024-11-20 11:38:48.054569] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:26:02.642 [2024-11-20 11:38:48.055428] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:26:02.642 [2024-11-20 11:38:48.055463] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:26:02.642 [2024-11-20 11:38:48.055548] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
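[editor's note] The lines above show nvmf_tgt being started inside the test namespace with --interrupt-mode and core mask 0x1E, then waitforlisten blocking until the RPC socket answers. A hedged sketch of that startup sequence follows; the rpc.py path and the use of rpc_get_methods as the readiness probe are assumptions standing in for the real waitforlisten helper.

    # Launch the interrupt-mode target in the namespace and wait for its RPC socket.
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E &
    nvmfpid=$!
    # Poll the default RPC socket (/var/tmp/spdk.sock) until the app is ready.
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 1 rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$nvmfpid" || { echo "nvmf_tgt died during startup" >&2; exit 1; }
        sleep 0.5
    done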
00:26:02.642 11:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:02.642 11:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:26:02.642 11:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:02.642 11:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:02.642 11:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:26:02.642 11:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:02.642 11:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:02.642 11:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:02.642 11:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:26:02.642 [2024-11-20 11:38:48.128397] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:02.642 11:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:02.642 11:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:26:02.642 11:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:02.642 11:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:26:02.642 11:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:26:02.642 11:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:26:02.642 11:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:26:02.642 11:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:02.642 11:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:26:02.642 Malloc0 00:26:02.642 [2024-11-20 11:38:48.200698] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:26:02.642 11:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:02.642 11:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:26:02.642 11:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:02.642 11:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:26:02.903 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
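[editor's note] The trace above shows the TCP transport being created (-t tcp -o -u 8192), a Malloc0 bdev appearing, and a listener coming up on 10.0.0.3:4420; host_management.sh batches the corresponding RPCs through a single rpc_cmd call fed from rpcs.txt. The sketch below is a hedged reconstruction of that batch as individual rpc.py calls: the subsystem name cnode0, the serial number, and the 64 MiB / 512 B Malloc sizing follow the values visible elsewhere in this log, but the exact set of RPCs is an assumption rather than a copy of the script.

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # assumed path inside the test VM
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0000000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420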
00:26:02.903 11:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=100925 00:26:02.903 11:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 100925 /var/tmp/bdevperf.sock 00:26:02.903 11:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 100925 ']' 00:26:02.903 11:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:02.903 11:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:26:02.903 11:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:02.903 11:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:26:02.903 11:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:02.903 11:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:02.903 11:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:26:02.903 11:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:26:02.904 11:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:26:02.904 11:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:02.904 11:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:02.904 { 00:26:02.904 "params": { 00:26:02.904 "name": "Nvme$subsystem", 00:26:02.904 "trtype": "$TEST_TRANSPORT", 00:26:02.904 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:02.904 "adrfam": "ipv4", 00:26:02.904 "trsvcid": "$NVMF_PORT", 00:26:02.904 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:02.904 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:02.904 "hdgst": ${hdgst:-false}, 00:26:02.904 "ddgst": ${ddgst:-false} 00:26:02.904 }, 00:26:02.904 "method": "bdev_nvme_attach_controller" 00:26:02.904 } 00:26:02.904 EOF 00:26:02.904 )") 00:26:02.904 11:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:26:02.904 11:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 
00:26:02.904 11:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:26:02.904 11:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:26:02.904 "params": { 00:26:02.904 "name": "Nvme0", 00:26:02.904 "trtype": "tcp", 00:26:02.904 "traddr": "10.0.0.3", 00:26:02.904 "adrfam": "ipv4", 00:26:02.904 "trsvcid": "4420", 00:26:02.904 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:02.904 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:02.904 "hdgst": false, 00:26:02.904 "ddgst": false 00:26:02.904 }, 00:26:02.904 "method": "bdev_nvme_attach_controller" 00:26:02.904 }' 00:26:02.904 [2024-11-20 11:38:48.306517] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 00:26:02.904 [2024-11-20 11:38:48.306649] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100925 ] 00:26:02.904 [2024-11-20 11:38:48.458564] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:03.202 [2024-11-20 11:38:48.516680] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:03.202 Running I/O for 10 seconds... 00:26:03.202 11:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:03.202 11:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:26:03.202 11:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:26:03.202 11:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:03.202 11:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:26:03.202 11:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:03.202 11:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:03.202 11:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:26:03.202 11:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:26:03.202 11:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:26:03.202 11:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:26:03.202 11:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:26:03.202 11:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:26:03.202 11:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:26:03.202 11:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:26:03.202 11:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:03.202 11:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:26:03.202 11:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:26:03.202 11:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:03.460 11:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=67 00:26:03.460 11:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']' 00:26:03.460 11:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:26:03.726 11:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:26:03.726 11:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:26:03.726 11:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:26:03.726 11:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:03.726 11:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:26:03.726 11:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:26:03.726 11:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:03.726 11:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=515 00:26:03.726 11:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 515 -ge 100 ']' 00:26:03.726 11:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:26:03.726 11:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@60 -- # break 00:26:03.726 11:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:26:03.726 11:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:26:03.726 11:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:03.726 11:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:26:03.726 [2024-11-20 11:38:49.152215] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1364ee0 is same with the state(6) to be set 00:26:03.726 [2024-11-20 11:38:49.152276] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1364ee0 is same with the state(6) to be set 00:26:03.726 [2024-11-20 11:38:49.152289] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1364ee0 is same with the state(6) to be set 00:26:03.726 [2024-11-20 11:38:49.152297] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1364ee0 is same with the state(6) to be set 00:26:03.726 [2024-11-20 11:38:49.152305] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1364ee0 is same with the state(6) to be set 00:26:03.726 [2024-11-20 11:38:49.152314] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1364ee0 is same with the state(6) to be set 00:26:03.726 [2024-11-20 11:38:49.152322] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1364ee0 is same with the state(6) to be set 00:26:03.726 [2024-11-20 11:38:49.152331] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1364ee0 is same with the state(6) to be set 00:26:03.726 [2024-11-20 11:38:49.152339] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1364ee0 is same with the state(6) to be set 00:26:03.726 [2024-11-20 11:38:49.152347] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1364ee0 is same with the state(6) to be set 00:26:03.726 [2024-11-20 11:38:49.152355] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1364ee0 is same with the state(6) to be set 00:26:03.726 [2024-11-20 11:38:49.152363] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1364ee0 is same with the state(6) to be set 00:26:03.726 [2024-11-20 11:38:49.152371] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1364ee0 is same with the state(6) to be set 00:26:03.726 [2024-11-20 11:38:49.152379] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1364ee0 is same with the state(6) to be set 00:26:03.726 [2024-11-20 11:38:49.152387] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1364ee0 is same with the state(6) to be set 00:26:03.726 [2024-11-20 11:38:49.152395] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1364ee0 is same with the state(6) to be set 00:26:03.726 [2024-11-20 11:38:49.152403] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1364ee0 is same with the state(6) to be set 00:26:03.726 [2024-11-20 11:38:49.152411] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1364ee0 is same with the state(6) to be set 00:26:03.726 [2024-11-20 11:38:49.152419] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1364ee0 is same with the state(6) to be set 00:26:03.726 [2024-11-20 11:38:49.152427] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1364ee0 is same with the state(6) to be set 00:26:03.726 [2024-11-20 11:38:49.152435] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1364ee0 is same with the state(6) to be set 00:26:03.726 [2024-11-20 11:38:49.152443] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1364ee0 is same with the state(6) to be set 00:26:03.726 [2024-11-20 11:38:49.152451] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1364ee0 is same with the state(6) to be set 00:26:03.726 [2024-11-20 11:38:49.152459] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1364ee0 is same with the 
state(6) to be set 00:26:03.726 [2024-11-20 11:38:49.152467] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1364ee0 is same with the state(6) to be set 00:26:03.726 [2024-11-20 11:38:49.152475] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1364ee0 is same with the state(6) to be set 00:26:03.726 [2024-11-20 11:38:49.152483] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1364ee0 is same with the state(6) to be set 00:26:03.726 [2024-11-20 11:38:49.152491] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1364ee0 is same with the state(6) to be set 00:26:03.726 [2024-11-20 11:38:49.152499] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1364ee0 is same with the state(6) to be set 00:26:03.726 [2024-11-20 11:38:49.152507] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1364ee0 is same with the state(6) to be set 00:26:03.726 [2024-11-20 11:38:49.152516] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1364ee0 is same with the state(6) to be set 00:26:03.726 [2024-11-20 11:38:49.152524] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1364ee0 is same with the state(6) to be set 00:26:03.726 [2024-11-20 11:38:49.152532] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1364ee0 is same with the state(6) to be set 00:26:03.726 [2024-11-20 11:38:49.152541] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1364ee0 is same with the state(6) to be set 00:26:03.726 [2024-11-20 11:38:49.152551] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1364ee0 is same with the state(6) to be set 00:26:03.726 [2024-11-20 11:38:49.152559] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1364ee0 is same with the state(6) to be set 00:26:03.726 [2024-11-20 11:38:49.152567] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1364ee0 is same with the state(6) to be set 00:26:03.726 [2024-11-20 11:38:49.152575] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1364ee0 is same with the state(6) to be set 00:26:03.726 [2024-11-20 11:38:49.152583] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1364ee0 is same with the state(6) to be set 00:26:03.726 [2024-11-20 11:38:49.152591] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1364ee0 is same with the state(6) to be set 00:26:03.726 [2024-11-20 11:38:49.152599] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1364ee0 is same with the state(6) to be set 00:26:03.726 [2024-11-20 11:38:49.152607] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1364ee0 is same with the state(6) to be set 00:26:03.726 [2024-11-20 11:38:49.152630] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1364ee0 is same with the state(6) to be set 00:26:03.726 [2024-11-20 11:38:49.152639] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1364ee0 is same with the state(6) to be set 00:26:03.726 [2024-11-20 11:38:49.152647] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1364ee0 is same with the state(6) to be set 00:26:03.726 [2024-11-20 11:38:49.152655] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x1364ee0 is same with the state(6) to be set 00:26:03.726 [2024-11-20 11:38:49.152663] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1364ee0 is same with the state(6) to be set 00:26:03.726 [2024-11-20 11:38:49.152671] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1364ee0 is same with the state(6) to be set 00:26:03.726 [2024-11-20 11:38:49.152679] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1364ee0 is same with the state(6) to be set 00:26:03.726 [2024-11-20 11:38:49.152687] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1364ee0 is same with the state(6) to be set 00:26:03.726 [2024-11-20 11:38:49.152695] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1364ee0 is same with the state(6) to be set 00:26:03.726 [2024-11-20 11:38:49.152703] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1364ee0 is same with the state(6) to be set 00:26:03.726 [2024-11-20 11:38:49.152712] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1364ee0 is same with the state(6) to be set 00:26:03.726 [2024-11-20 11:38:49.152720] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1364ee0 is same with the state(6) to be set 00:26:03.726 [2024-11-20 11:38:49.152728] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1364ee0 is same with the state(6) to be set 00:26:03.726 [2024-11-20 11:38:49.152736] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1364ee0 is same with the state(6) to be set 00:26:03.726 [2024-11-20 11:38:49.152744] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1364ee0 is same with the state(6) to be set 00:26:03.726 [2024-11-20 11:38:49.152752] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1364ee0 is same with the state(6) to be set 00:26:03.726 [2024-11-20 11:38:49.152760] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1364ee0 is same with the state(6) to be set 00:26:03.726 [2024-11-20 11:38:49.152768] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1364ee0 is same with the state(6) to be set 00:26:03.726 [2024-11-20 11:38:49.152776] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1364ee0 is same with the state(6) to be set 00:26:03.726 [2024-11-20 11:38:49.152785] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1364ee0 is same with the state(6) to be set 00:26:03.726 [2024-11-20 11:38:49.152793] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1364ee0 is same with the state(6) to be set 00:26:03.726 11:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:03.727 11:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:26:03.727 11:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:03.727 11:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:26:03.727 [2024-11-20 11:38:49.166255] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:03.727 [2024-11-20 11:38:49.166304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:03.727 [2024-11-20 11:38:49.166320] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:03.727 [2024-11-20 11:38:49.166330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:03.727 [2024-11-20 11:38:49.166341] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:03.727 [2024-11-20 11:38:49.166350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:03.727 [2024-11-20 11:38:49.166360] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:03.727 [2024-11-20 11:38:49.166370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:03.727 [2024-11-20 11:38:49.166379] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x214e660 is same with the state(6) to be set 00:26:03.727 11:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:03.727 11:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:26:03.727 [2024-11-20 11:38:49.175472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:81280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.727 [2024-11-20 11:38:49.175514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:03.727 [2024-11-20 11:38:49.175537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:81408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.727 [2024-11-20 11:38:49.175550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:03.727 [2024-11-20 11:38:49.175562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:81536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.727 [2024-11-20 11:38:49.175572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:03.727 [2024-11-20 11:38:49.175583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:81664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.727 [2024-11-20 11:38:49.175592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:03.727 [2024-11-20 11:38:49.175606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:81792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.727 [2024-11-20 11:38:49.175629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:03.727 [2024-11-20 
11:38:49.175643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.727 [2024-11-20 11:38:49.175653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:03.727 [2024-11-20 11:38:49.175665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:82048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.727 [2024-11-20 11:38:49.175674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:03.727 [2024-11-20 11:38:49.175686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:82176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.727 [2024-11-20 11:38:49.175696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:03.727 [2024-11-20 11:38:49.175708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:82304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.727 [2024-11-20 11:38:49.175717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:03.727 [2024-11-20 11:38:49.175728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:82432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.727 [2024-11-20 11:38:49.175738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:03.727 [2024-11-20 11:38:49.175749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:82560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.727 [2024-11-20 11:38:49.175758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:03.727 [2024-11-20 11:38:49.175770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:82688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.727 [2024-11-20 11:38:49.175779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:03.727 [2024-11-20 11:38:49.175790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:82816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.727 [2024-11-20 11:38:49.175799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:03.727 [2024-11-20 11:38:49.175811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:82944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.727 [2024-11-20 11:38:49.175820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:03.727 [2024-11-20 11:38:49.175832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:83072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.727 [2024-11-20 11:38:49.175841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:03.727 [2024-11-20 11:38:49.175852] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:83200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.727 [2024-11-20 11:38:49.175861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:03.727 [2024-11-20 11:38:49.175873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:83328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.727 [2024-11-20 11:38:49.175883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:03.727 [2024-11-20 11:38:49.175894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:83456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.727 [2024-11-20 11:38:49.175904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:03.727 [2024-11-20 11:38:49.175915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:83584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.727 [2024-11-20 11:38:49.175925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:03.727 [2024-11-20 11:38:49.175937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:83712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.727 [2024-11-20 11:38:49.175946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:03.727 [2024-11-20 11:38:49.175958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:83840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.727 [2024-11-20 11:38:49.175967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:03.727 [2024-11-20 11:38:49.175979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:83968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.727 [2024-11-20 11:38:49.175988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:03.727 [2024-11-20 11:38:49.175999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:84096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.727 [2024-11-20 11:38:49.176008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:03.727 [2024-11-20 11:38:49.176020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:84224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.727 [2024-11-20 11:38:49.176029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:03.727 [2024-11-20 11:38:49.176040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:84352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.727 [2024-11-20 11:38:49.176050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:03.727 [2024-11-20 11:38:49.176062] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:84480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.727 [2024-11-20 11:38:49.176071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:03.727 [2024-11-20 11:38:49.176082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:84608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.727 [2024-11-20 11:38:49.176091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:03.727 [2024-11-20 11:38:49.176103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:84736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.727 [2024-11-20 11:38:49.176112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:03.727 [2024-11-20 11:38:49.176123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:84864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.727 [2024-11-20 11:38:49.176132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:03.727 [2024-11-20 11:38:49.176144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:84992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.727 [2024-11-20 11:38:49.176153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:03.727 [2024-11-20 11:38:49.176165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:85120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.727 [2024-11-20 11:38:49.176174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:03.727 [2024-11-20 11:38:49.176192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:85248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.727 [2024-11-20 11:38:49.176201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:03.727 [2024-11-20 11:38:49.176212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:85376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.727 [2024-11-20 11:38:49.176221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:03.727 [2024-11-20 11:38:49.176233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:85504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.728 [2024-11-20 11:38:49.176242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:03.728 [2024-11-20 11:38:49.176253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:85632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.728 [2024-11-20 11:38:49.176262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:03.728 [2024-11-20 11:38:49.176274] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:85760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.728 [2024-11-20 11:38:49.176283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:03.728 [2024-11-20 11:38:49.176296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:85888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.728 [2024-11-20 11:38:49.176305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:03.728 [2024-11-20 11:38:49.176316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:86016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.728 [2024-11-20 11:38:49.176326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:03.728 [2024-11-20 11:38:49.176337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:86144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.728 [2024-11-20 11:38:49.176346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:03.728 [2024-11-20 11:38:49.176357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:86272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.728 [2024-11-20 11:38:49.176366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:03.728 [2024-11-20 11:38:49.176380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:86400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.728 [2024-11-20 11:38:49.176389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:03.728 [2024-11-20 11:38:49.176400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:86528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.728 [2024-11-20 11:38:49.176410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:03.728 [2024-11-20 11:38:49.176421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:86656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.728 [2024-11-20 11:38:49.176430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:03.728 [2024-11-20 11:38:49.176442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:86784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.728 [2024-11-20 11:38:49.176451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:03.728 [2024-11-20 11:38:49.176462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:86912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.728 [2024-11-20 11:38:49.176472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:03.728 [2024-11-20 11:38:49.176483] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:87040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.728 [2024-11-20 11:38:49.176492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:03.728 [2024-11-20 11:38:49.176504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:87168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.728 [2024-11-20 11:38:49.176513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:03.728 [2024-11-20 11:38:49.176524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:87296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.728 [2024-11-20 11:38:49.176533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:03.728 [2024-11-20 11:38:49.176544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:87424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.728 [2024-11-20 11:38:49.176554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:03.728 [2024-11-20 11:38:49.176566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:87552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.728 [2024-11-20 11:38:49.176575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:03.728 [2024-11-20 11:38:49.176586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:87680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.728 [2024-11-20 11:38:49.176596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:03.728 [2024-11-20 11:38:49.176607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:87808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.728 [2024-11-20 11:38:49.176627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:03.728 [2024-11-20 11:38:49.176639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:87936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.728 [2024-11-20 11:38:49.176649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:03.728 [2024-11-20 11:38:49.176660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:88064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.728 [2024-11-20 11:38:49.176669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:03.728 [2024-11-20 11:38:49.176682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:88192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.728 [2024-11-20 11:38:49.176691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:03.728 [2024-11-20 11:38:49.176702] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:88320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.728 [2024-11-20 11:38:49.176711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:03.728 [2024-11-20 11:38:49.176723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:88448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.728 [2024-11-20 11:38:49.176733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:03.728 [2024-11-20 11:38:49.176744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:88576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.728 [2024-11-20 11:38:49.176753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:03.728 [2024-11-20 11:38:49.176764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:88704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.728 [2024-11-20 11:38:49.176773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:03.728 [2024-11-20 11:38:49.176785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:88832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.728 [2024-11-20 11:38:49.176794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:03.728 [2024-11-20 11:38:49.176805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:88960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.728 [2024-11-20 11:38:49.176814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:03.728 [2024-11-20 11:38:49.176825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:89088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.728 [2024-11-20 11:38:49.176835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:03.728 [2024-11-20 11:38:49.176846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:89216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.728 [2024-11-20 11:38:49.176856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:03.728 [2024-11-20 11:38:49.176867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:89344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.728 [2024-11-20 11:38:49.176876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:03.728 [2024-11-20 11:38:49.176978] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x214e660 (9): Bad file descriptor 00:26:03.728 [2024-11-20 11:38:49.178132] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:26:03.728 task offset: 81280 on job bdev=Nvme0n1 fails 00:26:03.728 00:26:03.728 Latency(us) 00:26:03.728 
[2024-11-20T11:38:49.317Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:03.728 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:03.728 Job: Nvme0n1 ended in about 0.50 seconds with error 00:26:03.728 Verification LBA range: start 0x0 length 0x400 00:26:03.728 Nvme0n1 : 0.50 1269.63 79.35 127.96 0.00 44114.01 1884.16 42419.67 00:26:03.728 [2024-11-20T11:38:49.317Z] =================================================================================================================== 00:26:03.728 [2024-11-20T11:38:49.317Z] Total : 1269.63 79.35 127.96 0.00 44114.01 1884.16 42419.67 00:26:03.728 [2024-11-20 11:38:49.180202] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:26:03.728 [2024-11-20 11:38:49.183057] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 00:26:04.669 11:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 100925 00:26:04.669 /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh: line 91: kill: (100925) - No such process 00:26:04.669 11:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # true 00:26:04.669 11:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:26:04.669 11:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:26:04.669 11:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:26:04.669 11:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:26:04.669 11:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:26:04.669 11:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:04.669 11:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:04.669 { 00:26:04.669 "params": { 00:26:04.669 "name": "Nvme$subsystem", 00:26:04.669 "trtype": "$TEST_TRANSPORT", 00:26:04.669 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:04.669 "adrfam": "ipv4", 00:26:04.669 "trsvcid": "$NVMF_PORT", 00:26:04.669 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:04.669 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:04.669 "hdgst": ${hdgst:-false}, 00:26:04.669 "ddgst": ${ddgst:-false} 00:26:04.669 }, 00:26:04.669 "method": "bdev_nvme_attach_controller" 00:26:04.669 } 00:26:04.669 EOF 00:26:04.669 )") 00:26:04.669 11:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:26:04.669 11:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 
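The waitforio step earlier in this trace (read_io_count 67, then 515) is a bounded poll of bdev_get_iostat over the bdevperf RPC socket until the target bdev has completed enough reads. A minimal sketch of that loop, assuming SPDK's scripts/rpc.py is available and bdevperf is already listening on /var/tmp/bdevperf.sock:

# poll until Nvme0n1 has serviced at least 100 reads, giving up after 10 attempts
i=10
while ((i-- > 0)); do
    read_io_count=$(scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 \
        | jq -r '.bdevs[0].num_read_ops')
    [[ "${read_io_count:-0}" -ge 100 ]] && break
    sleep 0.25
done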
00:26:04.669 11:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:26:04.669 11:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:26:04.669 "params": { 00:26:04.669 "name": "Nvme0", 00:26:04.669 "trtype": "tcp", 00:26:04.669 "traddr": "10.0.0.3", 00:26:04.669 "adrfam": "ipv4", 00:26:04.669 "trsvcid": "4420", 00:26:04.669 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:04.670 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:04.670 "hdgst": false, 00:26:04.670 "ddgst": false 00:26:04.670 }, 00:26:04.670 "method": "bdev_nvme_attach_controller" 00:26:04.670 }' 00:26:04.670 [2024-11-20 11:38:50.225919] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 00:26:04.670 [2024-11-20 11:38:50.226019] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100966 ] 00:26:04.928 [2024-11-20 11:38:50.379242] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:04.928 [2024-11-20 11:38:50.420149] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:05.186 Running I/O for 1 seconds... 00:26:06.121 1472.00 IOPS, 92.00 MiB/s 00:26:06.121 Latency(us) 00:26:06.121 [2024-11-20T11:38:51.710Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:06.121 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:06.121 Verification LBA range: start 0x0 length 0x400 00:26:06.121 Nvme0n1 : 1.02 1508.88 94.30 0.00 0.00 41461.34 5391.83 42181.35 00:26:06.121 [2024-11-20T11:38:51.710Z] =================================================================================================================== 00:26:06.121 [2024-11-20T11:38:51.710Z] Total : 1508.88 94.30 0.00 0.00 41461.34 5391.83 42181.35 00:26:06.379 11:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:26:06.379 11:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:26:06.379 11:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevperf.conf 00:26:06.379 11:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:26:06.379 11:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:26:06.380 11:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:06.380 11:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:26:06.380 11:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:06.380 11:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:26:06.380 11:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:06.380 11:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r 
nvme-tcp 00:26:06.380 rmmod nvme_tcp 00:26:06.380 rmmod nvme_fabrics 00:26:06.380 rmmod nvme_keyring 00:26:06.380 11:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:06.380 11:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:26:06.380 11:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:26:06.380 11:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 100864 ']' 00:26:06.380 11:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 100864 00:26:06.380 11:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 100864 ']' 00:26:06.380 11:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 100864 00:26:06.380 11:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:26:06.380 11:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:06.380 11:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 100864 00:26:06.380 11:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:06.380 11:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:06.380 killing process with pid 100864 00:26:06.380 11:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 100864' 00:26:06.380 11:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 100864 00:26:06.380 11:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 100864 00:26:06.638 [2024-11-20 11:38:51.993873] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:26:06.638 11:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:06.638 11:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:06.638 11:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:06.638 11:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:26:06.638 11:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:26:06.638 11:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:06.638 11:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:26:06.638 11:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:06.638 11:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:26:06.638 11:38:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:26:06.638 11:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:26:06.638 11:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:26:06.638 11:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:26:06.638 11:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:26:06.638 11:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:26:06.638 11:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:26:06.638 11:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:26:06.638 11:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:26:06.638 11:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:26:06.638 11:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:26:06.638 11:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:26:06.897 11:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:26:06.897 11:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@246 -- # remove_spdk_ns 00:26:06.897 11:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:06.897 11:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:06.897 11:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:06.897 11:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@300 -- # return 0 00:26:06.897 11:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:26:06.897 00:26:06.897 real 0m5.183s 00:26:06.897 user 0m16.750s 00:26:06.897 sys 0m2.317s 00:26:06.897 11:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:06.897 11:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:26:06.897 ************************************ 00:26:06.897 END TEST nvmf_host_management 00:26:06.897 ************************************ 00:26:06.897 11:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:26:06.897 11:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 
']' 00:26:06.897 11:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:06.897 11:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:26:06.897 ************************************ 00:26:06.897 START TEST nvmf_lvol 00:26:06.897 ************************************ 00:26:06.897 11:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:26:06.897 * Looking for test storage... 00:26:06.897 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:26:06.897 11:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:26:06.897 11:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1693 -- # lcov --version 00:26:06.897 11:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:26:07.157 11:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:26:07.157 11:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:07.157 11:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:07.157 11:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:07.157 11:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:26:07.157 11:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:26:07.157 11:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:26:07.157 11:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:26:07.157 11:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:26:07.157 11:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:26:07.157 11:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:26:07.157 11:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:07.157 11:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:26:07.157 11:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:26:07.157 11:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:07.157 11:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:07.157 11:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:26:07.157 11:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:26:07.157 11:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:07.157 11:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:26:07.157 11:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:26:07.157 11:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:26:07.157 11:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:26:07.157 11:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:07.157 11:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:26:07.157 11:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:26:07.157 11:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:07.157 11:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:07.157 11:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:26:07.157 11:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:07.157 11:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:26:07.157 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:07.157 --rc genhtml_branch_coverage=1 00:26:07.157 --rc genhtml_function_coverage=1 00:26:07.157 --rc genhtml_legend=1 00:26:07.157 --rc geninfo_all_blocks=1 00:26:07.157 --rc geninfo_unexecuted_blocks=1 00:26:07.157 00:26:07.157 ' 00:26:07.157 11:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:26:07.157 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:07.157 --rc genhtml_branch_coverage=1 00:26:07.157 --rc genhtml_function_coverage=1 00:26:07.157 --rc genhtml_legend=1 00:26:07.157 --rc geninfo_all_blocks=1 00:26:07.157 --rc geninfo_unexecuted_blocks=1 00:26:07.157 00:26:07.157 ' 00:26:07.157 11:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:26:07.157 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:07.157 --rc genhtml_branch_coverage=1 00:26:07.157 --rc genhtml_function_coverage=1 00:26:07.157 --rc genhtml_legend=1 00:26:07.157 --rc geninfo_all_blocks=1 00:26:07.157 --rc geninfo_unexecuted_blocks=1 00:26:07.157 00:26:07.157 ' 00:26:07.157 11:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:26:07.157 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:07.157 --rc genhtml_branch_coverage=1 00:26:07.157 --rc genhtml_function_coverage=1 00:26:07.157 --rc genhtml_legend=1 00:26:07.157 --rc geninfo_all_blocks=1 00:26:07.157 --rc geninfo_unexecuted_blocks=1 00:26:07.157 00:26:07.157 ' 00:26:07.157 11:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
target/nvmf_lvol.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:26:07.157 11:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:26:07.157 11:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:07.157 11:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:07.157 11:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:07.157 11:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:07.157 11:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:07.157 11:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:07.157 11:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:07.157 11:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:07.157 11:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:07.157 11:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:07.158 11:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a 00:26:07.158 11:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=806d442c-08ba-4719-b76d-33169283459a 00:26:07.158 11:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:07.158 11:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:07.158 11:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:26:07.158 11:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:07.158 11:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:07.158 11:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:26:07.158 11:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:07.158 11:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:07.158 11:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:07.158 11:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:07.158 11:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:07.158 11:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:07.158 11:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:26:07.158 11:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:07.158 11:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:26:07.158 11:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:07.158 11:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:07.158 11:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:07.158 11:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:07.158 11:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:07.158 11:38:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:26:07.158 11:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:26:07.158 11:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:07.158 11:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:07.158 11:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:07.158 11:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:07.158 11:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:07.158 11:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:26:07.158 11:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:26:07.158 11:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:26:07.158 11:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:26:07.158 11:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:07.158 11:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:07.158 11:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:07.158 11:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:07.158 11:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:07.158 11:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:07.158 11:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:07.158 11:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:07.158 11:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:26:07.158 11:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:26:07.158 11:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:26:07.158 11:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:26:07.158 11:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:26:07.158 11:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@460 -- # nvmf_veth_init 00:26:07.158 11:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:07.158 11:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:26:07.158 11:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:26:07.158 11:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:26:07.158 11:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:07.158 11:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:26:07.158 11:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:26:07.158 11:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:26:07.158 11:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:26:07.159 11:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:26:07.159 11:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:26:07.159 11:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:07.159 11:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:26:07.159 11:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:26:07.159 11:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:26:07.159 11:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:26:07.159 11:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:26:07.159 Cannot find device "nvmf_init_br" 00:26:07.159 11:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@162 -- # true 00:26:07.159 11:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:26:07.159 Cannot find device "nvmf_init_br2" 00:26:07.159 11:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@163 -- # true 00:26:07.159 11:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:26:07.159 Cannot find device "nvmf_tgt_br" 00:26:07.159 11:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@164 -- # true 00:26:07.159 11:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:26:07.159 Cannot find device "nvmf_tgt_br2" 00:26:07.159 11:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@165 -- # true 00:26:07.159 11:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:26:07.159 Cannot find device "nvmf_init_br" 00:26:07.159 11:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@166 -- # true 00:26:07.159 11:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:26:07.159 Cannot find device "nvmf_init_br2" 00:26:07.159 11:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@167 -- # true 00:26:07.159 11:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:26:07.159 Cannot find 
device "nvmf_tgt_br" 00:26:07.159 11:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@168 -- # true 00:26:07.159 11:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:26:07.159 Cannot find device "nvmf_tgt_br2" 00:26:07.159 11:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@169 -- # true 00:26:07.159 11:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:26:07.159 Cannot find device "nvmf_br" 00:26:07.159 11:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@170 -- # true 00:26:07.159 11:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:26:07.159 Cannot find device "nvmf_init_if" 00:26:07.159 11:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@171 -- # true 00:26:07.159 11:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:26:07.159 Cannot find device "nvmf_init_if2" 00:26:07.159 11:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@172 -- # true 00:26:07.159 11:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:26:07.159 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:07.159 11:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@173 -- # true 00:26:07.159 11:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:26:07.159 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:07.159 11:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@174 -- # true 00:26:07.159 11:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:26:07.159 11:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:26:07.159 11:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:26:07.159 11:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:26:07.159 11:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:26:07.159 11:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:26:07.159 11:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:26:07.159 11:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:26:07.159 11:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:26:07.159 11:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:26:07.159 11:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:26:07.159 11:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:26:07.159 11:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:26:07.419 11:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:26:07.419 11:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:26:07.419 11:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:26:07.419 11:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:26:07.419 11:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:26:07.419 11:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:26:07.419 11:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:26:07.419 11:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:26:07.419 11:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:26:07.419 11:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:26:07.419 11:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:26:07.419 11:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:26:07.419 11:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:26:07.419 11:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:26:07.419 11:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:26:07.419 11:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:26:07.419 11:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:26:07.419 11:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:26:07.419 11:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:26:07.419 11:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:26:07.419 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:26:07.419 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.085 ms 00:26:07.419 00:26:07.419 --- 10.0.0.3 ping statistics --- 00:26:07.419 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:07.419 rtt min/avg/max/mdev = 0.085/0.085/0.085/0.000 ms 00:26:07.419 11:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:26:07.419 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:26:07.419 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.052 ms 00:26:07.419 00:26:07.419 --- 10.0.0.4 ping statistics --- 00:26:07.419 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:07.419 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:26:07.419 11:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:26:07.419 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:07.419 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.035 ms 00:26:07.419 00:26:07.419 --- 10.0.0.1 ping statistics --- 00:26:07.419 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:07.419 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:26:07.419 11:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:26:07.419 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:07.419 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.098 ms 00:26:07.419 00:26:07.419 --- 10.0.0.2 ping statistics --- 00:26:07.419 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:07.419 rtt min/avg/max/mdev = 0.098/0.098/0.098/0.000 ms 00:26:07.419 11:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:07.419 11:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@461 -- # return 0 00:26:07.420 11:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:07.420 11:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:07.420 11:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:07.420 11:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:07.420 11:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:07.420 11:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:07.420 11:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:07.420 11:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:26:07.420 11:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:07.420 11:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:07.420 11:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:26:07.420 11:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=101223 00:26:07.420 11:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk 
/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7 00:26:07.420 11:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 101223 00:26:07.420 11:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 101223 ']' 00:26:07.420 11:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:07.420 11:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:07.420 11:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:07.420 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:07.420 11:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:07.420 11:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:26:07.420 [2024-11-20 11:38:52.979157] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:26:07.420 [2024-11-20 11:38:52.980385] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 00:26:07.420 [2024-11-20 11:38:52.980459] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:07.679 [2024-11-20 11:38:53.131208] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:26:07.679 [2024-11-20 11:38:53.169307] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:07.679 [2024-11-20 11:38:53.169366] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:07.679 [2024-11-20 11:38:53.169380] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:07.679 [2024-11-20 11:38:53.169390] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:07.679 [2024-11-20 11:38:53.169400] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:07.679 [2024-11-20 11:38:53.170247] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:07.679 [2024-11-20 11:38:53.170323] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:07.679 [2024-11-20 11:38:53.170332] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:07.679 [2024-11-20 11:38:53.225152] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:26:07.679 [2024-11-20 11:38:53.226213] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:26:07.679 [2024-11-20 11:38:53.226220] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:26:07.679 [2024-11-20 11:38:53.226219] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
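The nvmf_veth_init steps traced above build the virtual test network used for the rest of this run: two veth pairs whose host-side ends are joined by the bridge nvmf_br, with the target-side ends moved into the nvmf_tgt_ns_spdk namespace. A condensed single-pair sketch of that topology, assuming root on a scratch host and reusing the names and addresses from the trace (the real helper also creates the *_if2/*_br2 pair and assigns 10.0.0.2/10.0.0.4):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br      # initiator-side pair
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br        # target-side pair
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                 # target end lives in the namespace
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up && ip link set nvmf_init_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip link set nvmf_tgt_br up
  ip link add nvmf_br type bridge && ip link set nvmf_br up      # bridge joins the host-side veth ends
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # accept NVMe/TCP port 4420, as the harness does
  ping -c 1 10.0.0.3                                             # host initiator reaches the namespaced target address

The target itself is then launched inside the namespace (ip netns exec nvmf_tgt_ns_spdk nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7, as traced above), so it listens on 10.0.0.3 while the initiator connects from the host side.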
00:26:07.679 11:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:07.679 11:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:26:07.679 11:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:07.679 11:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:07.679 11:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:26:07.938 11:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:07.938 11:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:26:08.197 [2024-11-20 11:38:53.543770] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:08.197 11:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:26:08.456 11:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:26:08.456 11:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:26:08.714 11:38:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:26:08.714 11:38:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:26:09.281 11:38:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:26:09.539 11:38:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=d8961bb3-f66f-4411-b91f-41bdb9b384ab 00:26:09.539 11:38:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u d8961bb3-f66f-4411-b91f-41bdb9b384ab lvol 20 00:26:09.798 11:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=5722abad-38f6-44b7-946e-713ee487cb76 00:26:09.798 11:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:26:10.057 11:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 5722abad-38f6-44b7-946e-713ee487cb76 00:26:10.314 11:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:26:10.572 [2024-11-20 11:38:56.019838] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:26:10.572 11:38:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:26:10.831 11:38:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=101362 00:26:10.831 11:38:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:26:10.831 11:38:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:26:11.765 11:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_snapshot 5722abad-38f6-44b7-946e-713ee487cb76 MY_SNAPSHOT 00:26:12.329 11:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=046d11c3-e292-4af8-ac3d-af59b420afee 00:26:12.329 11:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_resize 5722abad-38f6-44b7-946e-713ee487cb76 30 00:26:12.587 11:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_clone 046d11c3-e292-4af8-ac3d-af59b420afee MY_CLONE 00:26:13.154 11:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=b509238d-b633-4697-b524-5b37f09c8c38 00:26:13.154 11:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_inflate b509238d-b633-4697-b524-5b37f09c8c38 00:26:13.721 11:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 101362 00:26:21.938 Initializing NVMe Controllers 00:26:21.938 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode0 00:26:21.938 Controller IO queue size 128, less than required. 00:26:21.938 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:21.938 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:26:21.938 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:26:21.938 Initialization complete. Launching workers. 
00:26:21.938 ======================================================== 00:26:21.938 Latency(us) 00:26:21.938 Device Information : IOPS MiB/s Average min max 00:26:21.938 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10269.50 40.12 12466.80 2006.79 74270.31 00:26:21.938 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10254.60 40.06 12483.40 344.33 68449.96 00:26:21.938 ======================================================== 00:26:21.938 Total : 20524.10 80.17 12475.10 344.33 74270.31 00:26:21.938 00:26:21.938 11:39:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:26:21.938 11:39:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 5722abad-38f6-44b7-946e-713ee487cb76 00:26:21.938 11:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u d8961bb3-f66f-4411-b91f-41bdb9b384ab 00:26:21.938 11:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:26:21.938 11:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:26:21.938 11:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:26:21.938 11:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:21.938 11:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:26:22.197 11:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:22.197 11:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:26:22.197 11:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:22.197 11:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:22.197 rmmod nvme_tcp 00:26:22.197 rmmod nvme_fabrics 00:26:22.197 rmmod nvme_keyring 00:26:22.197 11:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:22.197 11:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:26:22.197 11:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:26:22.197 11:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 101223 ']' 00:26:22.197 11:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 101223 00:26:22.197 11:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 101223 ']' 00:26:22.197 11:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 101223 00:26:22.197 11:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:26:22.197 11:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:22.197 11:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 101223 00:26:22.197 killing 
process with pid 101223 00:26:22.197 11:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:22.197 11:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:22.197 11:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 101223' 00:26:22.197 11:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 101223 00:26:22.198 11:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 101223 00:26:22.457 11:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:22.457 11:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:22.457 11:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:22.457 11:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:26:22.457 11:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:22.457 11:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:26:22.457 11:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:26:22.457 11:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:22.457 11:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:26:22.457 11:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:26:22.457 11:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:26:22.457 11:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:26:22.457 11:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:26:22.457 11:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:26:22.457 11:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:26:22.457 11:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:26:22.457 11:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:26:22.457 11:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:26:22.457 11:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:26:22.457 11:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:26:22.457 11:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:26:22.457 11:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:26:22.457 
11:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@246 -- # remove_spdk_ns 00:26:22.457 11:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:22.457 11:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:22.457 11:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:22.715 11:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@300 -- # return 0 00:26:22.715 ************************************ 00:26:22.715 END TEST nvmf_lvol 00:26:22.715 ************************************ 00:26:22.715 00:26:22.715 real 0m15.730s 00:26:22.715 user 0m56.433s 00:26:22.715 sys 0m5.742s 00:26:22.715 11:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:22.715 11:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:26:22.715 11:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:26:22.715 11:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:26:22.715 11:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:22.715 11:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:26:22.715 ************************************ 00:26:22.715 START TEST nvmf_lvs_grow 00:26:22.715 ************************************ 00:26:22.715 11:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:26:22.715 * Looking for test storage... 
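For reference, the nvmf_lvol run that just finished (END TEST above, real 0m15.730s) drove the target through the following JSON-RPC sequence via scripts/rpc.py. This is a condensed sketch rather than the full nvmf_lvol.sh; the UUID capture pattern mirrors how the test assigns lvs, lvol, snapshot and clone in the trace, and all RPC names and arguments are the ones shown there:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create 64 512                                  # Malloc0
  $rpc bdev_malloc_create 64 512                                  # Malloc1
  $rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'  # stripe the two malloc bdevs
  lvs=$($rpc bdev_lvol_create_lvstore raid0 lvs)                  # lvstore UUID
  lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 20)                 # LVOL_BDEV_INIT_SIZE=20
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420
  # while spdk_nvme_perf is writing to the namespace over TCP:
  snapshot=$($rpc bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)
  $rpc bdev_lvol_resize "$lvol" 30                                # LVOL_BDEV_FINAL_SIZE=30
  clone=$($rpc bdev_lvol_clone "$snapshot" MY_CLONE)
  $rpc bdev_lvol_inflate "$clone"
  # teardown, as traced just before the END TEST marker:
  $rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
  $rpc bdev_lvol_delete "$lvol"
  $rpc bdev_lvol_delete_lvstore -u "$lvs"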
00:26:22.715 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:26:22.716 11:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:26:22.716 11:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lcov --version 00:26:22.716 11:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:26:22.716 11:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:26:22.716 11:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:22.716 11:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:22.716 11:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:22.716 11:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:26:22.716 11:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:26:22.716 11:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:26:22.716 11:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:26:22.716 11:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:26:22.716 11:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:26:22.716 11:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:26:22.716 11:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:22.716 11:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:26:22.716 11:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:26:22.716 11:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:22.716 11:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:22.716 11:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:26:22.716 11:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:26:22.716 11:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:22.716 11:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:26:22.716 11:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:26:22.716 11:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:26:22.716 11:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:26:22.716 11:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:22.716 11:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:26:22.716 11:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:26:22.716 11:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:22.716 11:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:22.716 11:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:26:22.716 11:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:22.716 11:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:26:22.716 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:22.716 --rc genhtml_branch_coverage=1 00:26:22.716 --rc genhtml_function_coverage=1 00:26:22.716 --rc genhtml_legend=1 00:26:22.716 --rc geninfo_all_blocks=1 00:26:22.716 --rc geninfo_unexecuted_blocks=1 00:26:22.716 00:26:22.716 ' 00:26:22.716 11:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:26:22.716 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:22.716 --rc genhtml_branch_coverage=1 00:26:22.716 --rc genhtml_function_coverage=1 00:26:22.716 --rc genhtml_legend=1 00:26:22.716 --rc geninfo_all_blocks=1 00:26:22.716 --rc geninfo_unexecuted_blocks=1 00:26:22.716 00:26:22.716 ' 00:26:22.716 11:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:26:22.716 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:22.716 --rc genhtml_branch_coverage=1 00:26:22.716 --rc genhtml_function_coverage=1 00:26:22.716 --rc genhtml_legend=1 00:26:22.716 --rc geninfo_all_blocks=1 00:26:22.716 --rc geninfo_unexecuted_blocks=1 00:26:22.716 00:26:22.716 ' 00:26:22.716 11:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:26:22.716 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:22.716 --rc genhtml_branch_coverage=1 00:26:22.716 --rc genhtml_function_coverage=1 00:26:22.716 --rc genhtml_legend=1 00:26:22.716 --rc geninfo_all_blocks=1 00:26:22.716 --rc geninfo_unexecuted_blocks=1 00:26:22.716 00:26:22.716 ' 00:26:22.716 11:39:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:26:22.716 11:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:26:22.975 11:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:22.975 11:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:22.975 11:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:22.975 11:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:22.975 11:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:22.975 11:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:22.975 11:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:22.975 11:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:22.976 11:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:22.976 11:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:22.976 11:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a 00:26:22.976 11:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=806d442c-08ba-4719-b76d-33169283459a 00:26:22.976 11:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:22.976 11:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:22.976 11:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:26:22.976 11:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:22.976 11:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:22.976 11:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:26:22.976 11:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:22.976 11:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:22.976 11:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:22.976 11:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:22.976 11:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:22.976 11:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:22.976 11:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:26:22.976 11:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:22.976 11:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:26:22.976 11:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:22.976 11:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:22.976 11:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:22.976 11:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:22.976 11:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:26:22.976 11:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:26:22.976 11:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:26:22.976 11:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:22.976 11:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:22.976 11:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:22.976 11:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:26:22.976 11:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:26:22.976 11:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:26:22.976 11:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:22.976 11:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:22.976 11:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:22.976 11:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:22.976 11:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:22.976 11:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:22.976 11:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:22.976 11:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:22.976 11:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:26:22.976 11:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:26:22.976 11:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:26:22.976 11:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:26:22.976 11:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:26:22.976 11:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@460 -- # nvmf_veth_init 00:26:22.976 11:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:22.976 11:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:26:22.976 11:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:26:22.976 11:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:26:22.976 11:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:22.976 11:39:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:26:22.976 11:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:26:22.976 11:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:26:22.976 11:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:26:22.976 11:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:26:22.976 11:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:26:22.976 11:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:22.976 11:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:26:22.976 11:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:26:22.976 11:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:26:22.976 11:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:26:22.976 11:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:26:22.976 Cannot find device "nvmf_init_br" 00:26:22.976 11:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@162 -- # true 00:26:22.976 11:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:26:22.976 Cannot find device "nvmf_init_br2" 00:26:22.976 11:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@163 -- # true 00:26:22.976 11:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:26:22.976 Cannot find device "nvmf_tgt_br" 00:26:22.976 11:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@164 -- # true 00:26:22.976 11:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:26:22.976 Cannot find device "nvmf_tgt_br2" 00:26:22.977 11:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@165 -- # true 00:26:22.977 11:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:26:22.977 Cannot find device "nvmf_init_br" 00:26:22.977 11:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@166 -- # true 00:26:22.977 11:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:26:22.977 Cannot find device "nvmf_init_br2" 00:26:22.977 11:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@167 -- # true 00:26:22.977 11:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:26:22.977 Cannot find device "nvmf_tgt_br" 00:26:22.977 11:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- 
nvmf/common.sh@168 -- # true 00:26:22.977 11:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:26:22.977 Cannot find device "nvmf_tgt_br2" 00:26:22.977 11:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@169 -- # true 00:26:22.977 11:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:26:22.977 Cannot find device "nvmf_br" 00:26:22.977 11:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@170 -- # true 00:26:22.977 11:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:26:22.977 Cannot find device "nvmf_init_if" 00:26:22.977 11:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@171 -- # true 00:26:22.977 11:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:26:22.977 Cannot find device "nvmf_init_if2" 00:26:22.977 11:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@172 -- # true 00:26:22.977 11:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:26:22.977 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:22.977 11:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@173 -- # true 00:26:22.977 11:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:26:22.977 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:22.977 11:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@174 -- # true 00:26:22.977 11:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:26:22.977 11:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:26:22.977 11:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:26:22.977 11:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:26:22.977 11:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:26:22.977 11:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:26:22.977 11:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:26:22.977 11:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:26:22.977 11:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:26:22.977 11:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:26:22.977 11:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- 
nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:26:22.977 11:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:26:22.977 11:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:26:22.977 11:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:26:22.977 11:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:26:22.977 11:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:26:23.236 11:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:26:23.236 11:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:26:23.236 11:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:26:23.236 11:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:26:23.236 11:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:26:23.236 11:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:26:23.236 11:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:26:23.236 11:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:26:23.236 11:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:26:23.236 11:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:26:23.236 11:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:26:23.236 11:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:26:23.236 11:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:26:23.236 11:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:26:23.236 11:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:26:23.236 11:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:26:23.236 11:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@222 -- # ping 
-c 1 10.0.0.3 00:26:23.236 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:26:23.236 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.071 ms 00:26:23.236 00:26:23.236 --- 10.0.0.3 ping statistics --- 00:26:23.236 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:23.236 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:26:23.236 11:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:26:23.236 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:26:23.236 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.056 ms 00:26:23.236 00:26:23.236 --- 10.0.0.4 ping statistics --- 00:26:23.236 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:23.236 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:26:23.236 11:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:26:23.236 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:23.236 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.044 ms 00:26:23.236 00:26:23.236 --- 10.0.0.1 ping statistics --- 00:26:23.236 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:23.236 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:26:23.236 11:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:26:23.236 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:23.236 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.060 ms 00:26:23.236 00:26:23.236 --- 10.0.0.2 ping statistics --- 00:26:23.236 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:23.236 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:26:23.236 11:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:23.236 11:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@461 -- # return 0 00:26:23.236 11:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:23.236 11:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:23.236 11:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:23.236 11:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:23.236 11:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:23.236 11:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:23.236 11:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:23.236 11:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:26:23.236 11:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:23.236 11:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:23.236 11:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:26:23.236 11:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=101771 00:26:23.236 11:39:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 101771 00:26:23.237 11:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:26:23.237 11:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 101771 ']' 00:26:23.237 11:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:23.237 11:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:23.237 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:23.237 11:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:23.237 11:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:23.237 11:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:26:23.237 [2024-11-20 11:39:08.767087] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:26:23.237 [2024-11-20 11:39:08.768147] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 00:26:23.237 [2024-11-20 11:39:08.768212] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:23.500 [2024-11-20 11:39:08.918205] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:23.500 [2024-11-20 11:39:08.955412] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:23.500 [2024-11-20 11:39:08.955695] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:23.500 [2024-11-20 11:39:08.955806] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:23.500 [2024-11-20 11:39:08.955917] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:23.500 [2024-11-20 11:39:08.956005] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:23.500 [2024-11-20 11:39:08.956426] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:23.500 [2024-11-20 11:39:09.010473] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:26:23.500 [2024-11-20 11:39:09.011078] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
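The trace above builds the isolated test network by hand: a network namespace for the target, veth pairs for the initiator- and target-side interfaces, a bridge joining the host-side peers, iptables ACCEPT rules for port 4420, and connectivity pings before nvmf_tgt is launched inside the namespace in interrupt mode. The following is a condensed, illustrative sketch of that sequence; interface, namespace, and address names are taken from the log, but the consolidated script is not the nvmf_veth_init implementation itself.

#!/usr/bin/env bash
# Illustrative condensation of the veth/bridge setup traced above (nvmf_veth_init).
set -e

NETNS=nvmf_tgt_ns_spdk
ip netns add "$NETNS"

# One veth pair per interface: the *_br peers stay host-visible, the target ends move into the netns.
ip link add nvmf_init_if  type veth peer name nvmf_init_br
ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns "$NETNS"
ip link set nvmf_tgt_if2 netns "$NETNS"

# Initiator addresses stay in the root namespace, target addresses live in the netns.
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip addr add 10.0.0.2/24 dev nvmf_init_if2
ip netns exec "$NETNS" ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip netns exec "$NETNS" ip addr add 10.0.0.4/24 dev nvmf_tgt_if2

for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" up
done
ip netns exec "$NETNS" ip link set nvmf_tgt_if up
ip netns exec "$NETNS" ip link set nvmf_tgt_if2 up
ip netns exec "$NETNS" ip link set lo up

# Bridge the host-side peers together so the initiator and target addresses can reach each other.
ip link add nvmf_br type bridge
ip link set nvmf_br up
for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" master nvmf_br
done

# Allow NVMe/TCP traffic on port 4420 in, and bridged traffic through.
iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

ping -c 1 10.0.0.3                           # initiator -> target
ip netns exec "$NETNS" ping -c 1 10.0.0.1    # target -> initiator

The target is then started inside that same namespace (ip netns exec nvmf_tgt_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1, as shown above), which is why the subsystem listeners created later bind to 10.0.0.3.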
00:26:23.500 11:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:23.500 11:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:26:23.500 11:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:23.500 11:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:23.500 11:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:26:23.500 11:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:23.500 11:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:26:24.076 [2024-11-20 11:39:09.381191] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:24.076 11:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:26:24.076 11:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:26:24.076 11:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:24.076 11:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:26:24.076 ************************************ 00:26:24.076 START TEST lvs_grow_clean 00:26:24.076 ************************************ 00:26:24.076 11:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:26:24.076 11:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:26:24.076 11:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:26:24.076 11:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:26:24.076 11:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:26:24.076 11:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:26:24.076 11:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:26:24.076 11:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:26:24.076 11:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:26:24.076 11:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:26:24.335 11:39:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:26:24.335 11:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:26:24.594 11:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=833df456-539a-486f-a655-40af418cf47e 00:26:24.594 11:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:26:24.594 11:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 833df456-539a-486f-a655-40af418cf47e 00:26:24.853 11:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:26:24.853 11:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:26:24.853 11:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 833df456-539a-486f-a655-40af418cf47e lvol 150 00:26:25.112 11:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=2439c5a2-0a75-4477-ae77-97c6899f2cd4 00:26:25.112 11:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:26:25.112 11:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:26:25.680 [2024-11-20 11:39:10.965138] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:26:25.680 [2024-11-20 11:39:10.965294] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:26:25.680 true 00:26:25.680 11:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 833df456-539a-486f-a655-40af418cf47e 00:26:25.680 11:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:26:25.938 11:39:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:26:25.938 11:39:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:26:26.197 11:39:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 2439c5a2-0a75-4477-ae77-97c6899f2cd4 00:26:26.456 11:39:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- 
target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:26:26.715 [2024-11-20 11:39:12.177591] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:26:26.715 11:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:26:26.973 11:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=101918 00:26:26.973 11:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:26:26.973 11:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:26.973 11:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 101918 /var/tmp/bdevperf.sock 00:26:26.973 11:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 101918 ']' 00:26:26.973 11:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:26.973 11:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:26.973 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:26:26.973 11:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:26.973 11:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:26.973 11:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:26:26.973 [2024-11-20 11:39:12.516757] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 
00:26:26.973 [2024-11-20 11:39:12.516845] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid101918 ] 00:26:27.234 [2024-11-20 11:39:12.666338] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:27.234 [2024-11-20 11:39:12.705531] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:27.234 11:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:27.234 11:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:26:27.234 11:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:26:27.802 Nvme0n1 00:26:27.802 11:39:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:26:28.062 [ 00:26:28.062 { 00:26:28.062 "aliases": [ 00:26:28.062 "2439c5a2-0a75-4477-ae77-97c6899f2cd4" 00:26:28.062 ], 00:26:28.062 "assigned_rate_limits": { 00:26:28.062 "r_mbytes_per_sec": 0, 00:26:28.062 "rw_ios_per_sec": 0, 00:26:28.062 "rw_mbytes_per_sec": 0, 00:26:28.062 "w_mbytes_per_sec": 0 00:26:28.062 }, 00:26:28.062 "block_size": 4096, 00:26:28.062 "claimed": false, 00:26:28.062 "driver_specific": { 00:26:28.062 "mp_policy": "active_passive", 00:26:28.062 "nvme": [ 00:26:28.062 { 00:26:28.062 "ctrlr_data": { 00:26:28.062 "ana_reporting": false, 00:26:28.062 "cntlid": 1, 00:26:28.062 "firmware_revision": "25.01", 00:26:28.062 "model_number": "SPDK bdev Controller", 00:26:28.062 "multi_ctrlr": true, 00:26:28.062 "oacs": { 00:26:28.062 "firmware": 0, 00:26:28.062 "format": 0, 00:26:28.062 "ns_manage": 0, 00:26:28.062 "security": 0 00:26:28.062 }, 00:26:28.062 "serial_number": "SPDK0", 00:26:28.062 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:28.062 "vendor_id": "0x8086" 00:26:28.062 }, 00:26:28.062 "ns_data": { 00:26:28.062 "can_share": true, 00:26:28.062 "id": 1 00:26:28.062 }, 00:26:28.062 "trid": { 00:26:28.062 "adrfam": "IPv4", 00:26:28.062 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:28.062 "traddr": "10.0.0.3", 00:26:28.062 "trsvcid": "4420", 00:26:28.062 "trtype": "TCP" 00:26:28.062 }, 00:26:28.062 "vs": { 00:26:28.062 "nvme_version": "1.3" 00:26:28.062 } 00:26:28.062 } 00:26:28.062 ] 00:26:28.062 }, 00:26:28.062 "memory_domains": [ 00:26:28.062 { 00:26:28.062 "dma_device_id": "system", 00:26:28.062 "dma_device_type": 1 00:26:28.062 } 00:26:28.062 ], 00:26:28.062 "name": "Nvme0n1", 00:26:28.062 "num_blocks": 38912, 00:26:28.062 "numa_id": -1, 00:26:28.062 "product_name": "NVMe disk", 00:26:28.062 "supported_io_types": { 00:26:28.062 "abort": true, 00:26:28.062 "compare": true, 00:26:28.062 "compare_and_write": true, 00:26:28.062 "copy": true, 00:26:28.062 "flush": true, 00:26:28.062 "get_zone_info": false, 00:26:28.062 "nvme_admin": true, 00:26:28.062 "nvme_io": true, 00:26:28.062 "nvme_io_md": false, 00:26:28.062 "nvme_iov_md": false, 00:26:28.062 "read": true, 00:26:28.062 "reset": true, 00:26:28.062 "seek_data": false, 00:26:28.062 
"seek_hole": false, 00:26:28.062 "unmap": true, 00:26:28.062 "write": true, 00:26:28.062 "write_zeroes": true, 00:26:28.062 "zcopy": false, 00:26:28.062 "zone_append": false, 00:26:28.062 "zone_management": false 00:26:28.062 }, 00:26:28.062 "uuid": "2439c5a2-0a75-4477-ae77-97c6899f2cd4", 00:26:28.062 "zoned": false 00:26:28.062 } 00:26:28.062 ] 00:26:28.062 11:39:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=101948 00:26:28.062 11:39:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:26:28.062 11:39:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:26:28.062 Running I/O for 10 seconds... 00:26:28.997 Latency(us) 00:26:28.997 [2024-11-20T11:39:14.586Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:28.997 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:26:28.997 Nvme0n1 : 1.00 7058.00 27.57 0.00 0.00 0.00 0.00 0.00 00:26:28.997 [2024-11-20T11:39:14.586Z] =================================================================================================================== 00:26:28.997 [2024-11-20T11:39:14.586Z] Total : 7058.00 27.57 0.00 0.00 0.00 0.00 0.00 00:26:28.997 00:26:29.932 11:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 833df456-539a-486f-a655-40af418cf47e 00:26:30.190 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:26:30.190 Nvme0n1 : 2.00 7369.50 28.79 0.00 0.00 0.00 0.00 0.00 00:26:30.190 [2024-11-20T11:39:15.779Z] =================================================================================================================== 00:26:30.190 [2024-11-20T11:39:15.779Z] Total : 7369.50 28.79 0.00 0.00 0.00 0.00 0.00 00:26:30.190 00:26:30.449 true 00:26:30.449 11:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 833df456-539a-486f-a655-40af418cf47e 00:26:30.449 11:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:26:30.707 11:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:26:30.707 11:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:26:30.707 11:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 101948 00:26:30.966 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:26:30.966 Nvme0n1 : 3.00 7487.33 29.25 0.00 0.00 0.00 0.00 0.00 00:26:30.966 [2024-11-20T11:39:16.555Z] =================================================================================================================== 00:26:30.966 [2024-11-20T11:39:16.555Z] Total : 7487.33 29.25 0.00 0.00 0.00 0.00 0.00 00:26:30.966 00:26:32.343 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:26:32.343 Nvme0n1 : 4.00 7388.50 28.86 0.00 0.00 0.00 0.00 0.00 00:26:32.343 
[2024-11-20T11:39:17.932Z] =================================================================================================================== 00:26:32.343 [2024-11-20T11:39:17.932Z] Total : 7388.50 28.86 0.00 0.00 0.00 0.00 0.00 00:26:32.343 00:26:33.280 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:26:33.280 Nvme0n1 : 5.00 7409.00 28.94 0.00 0.00 0.00 0.00 0.00 00:26:33.280 [2024-11-20T11:39:18.869Z] =================================================================================================================== 00:26:33.280 [2024-11-20T11:39:18.869Z] Total : 7409.00 28.94 0.00 0.00 0.00 0.00 0.00 00:26:33.280 00:26:34.217 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:26:34.217 Nvme0n1 : 6.00 7392.83 28.88 0.00 0.00 0.00 0.00 0.00 00:26:34.217 [2024-11-20T11:39:19.806Z] =================================================================================================================== 00:26:34.217 [2024-11-20T11:39:19.806Z] Total : 7392.83 28.88 0.00 0.00 0.00 0.00 0.00 00:26:34.217 00:26:35.153 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:26:35.153 Nvme0n1 : 7.00 7280.43 28.44 0.00 0.00 0.00 0.00 0.00 00:26:35.153 [2024-11-20T11:39:20.742Z] =================================================================================================================== 00:26:35.153 [2024-11-20T11:39:20.742Z] Total : 7280.43 28.44 0.00 0.00 0.00 0.00 0.00 00:26:35.153 00:26:36.133 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:26:36.133 Nvme0n1 : 8.00 7229.25 28.24 0.00 0.00 0.00 0.00 0.00 00:26:36.133 [2024-11-20T11:39:21.722Z] =================================================================================================================== 00:26:36.133 [2024-11-20T11:39:21.722Z] Total : 7229.25 28.24 0.00 0.00 0.00 0.00 0.00 00:26:36.133 00:26:37.140 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:26:37.140 Nvme0n1 : 9.00 7233.78 28.26 0.00 0.00 0.00 0.00 0.00 00:26:37.140 [2024-11-20T11:39:22.729Z] =================================================================================================================== 00:26:37.140 [2024-11-20T11:39:22.729Z] Total : 7233.78 28.26 0.00 0.00 0.00 0.00 0.00 00:26:37.140 00:26:38.074 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:26:38.074 Nvme0n1 : 10.00 7169.40 28.01 0.00 0.00 0.00 0.00 0.00 00:26:38.074 [2024-11-20T11:39:23.663Z] =================================================================================================================== 00:26:38.074 [2024-11-20T11:39:23.663Z] Total : 7169.40 28.01 0.00 0.00 0.00 0.00 0.00 00:26:38.074 00:26:38.074 00:26:38.074 Latency(us) 00:26:38.074 [2024-11-20T11:39:23.663Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:38.074 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:26:38.074 Nvme0n1 : 10.01 7174.66 28.03 0.00 0.00 17833.31 7745.16 76260.07 00:26:38.074 [2024-11-20T11:39:23.663Z] =================================================================================================================== 00:26:38.074 [2024-11-20T11:39:23.663Z] Total : 7174.66 28.03 0.00 0.00 17833.31 7745.16 76260.07 00:26:38.074 { 00:26:38.074 "results": [ 00:26:38.074 { 00:26:38.074 "job": "Nvme0n1", 00:26:38.074 "core_mask": "0x2", 00:26:38.074 "workload": "randwrite", 00:26:38.074 "status": "finished", 00:26:38.074 "queue_depth": 128, 00:26:38.074 "io_size": 4096, 
00:26:38.074 "runtime": 10.010514, 00:26:38.074 "iops": 7174.656566086417, 00:26:38.074 "mibps": 28.026002211275067, 00:26:38.074 "io_failed": 0, 00:26:38.074 "io_timeout": 0, 00:26:38.074 "avg_latency_us": 17833.305510339957, 00:26:38.074 "min_latency_us": 7745.163636363636, 00:26:38.074 "max_latency_us": 76260.07272727272 00:26:38.074 } 00:26:38.074 ], 00:26:38.074 "core_count": 1 00:26:38.074 } 00:26:38.074 11:39:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 101918 00:26:38.075 11:39:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 101918 ']' 00:26:38.075 11:39:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 101918 00:26:38.075 11:39:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:26:38.075 11:39:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:38.075 11:39:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 101918 00:26:38.075 11:39:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:38.075 killing process with pid 101918 00:26:38.075 11:39:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:38.075 11:39:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 101918' 00:26:38.075 Received shutdown signal, test time was about 10.000000 seconds 00:26:38.075 00:26:38.075 Latency(us) 00:26:38.075 [2024-11-20T11:39:23.664Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:38.075 [2024-11-20T11:39:23.664Z] =================================================================================================================== 00:26:38.075 [2024-11-20T11:39:23.664Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:38.075 11:39:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 101918 00:26:38.075 11:39:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 101918 00:26:38.333 11:39:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:26:38.590 11:39:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:26:38.848 11:39:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 833df456-539a-486f-a655-40af418cf47e 00:26:38.848 11:39:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:26:39.108 11:39:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 
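After the bdevperf run finishes, the clean-run teardown above verifies the grown store by reading the cluster counts back over JSON-RPC (total_data_clusters went from 49 to 99 earlier in the run, and free_clusters comes back as 61). A hedged sketch of that grow-and-verify step, using the same rpc.py methods and jq filters seen in the trace (the UUID is the lvstore created earlier in this run):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
lvs_uuid=833df456-539a-486f-a655-40af418cf47e   # lvstore created earlier in this run

# Grow the lvstore to fill the resized AIO bdev, then confirm the cluster counts.
"$rpc" bdev_lvol_grow_lvstore -u "$lvs_uuid"

total=$("$rpc" bdev_lvol_get_lvstores -u "$lvs_uuid" | jq -r '.[0].total_data_clusters')
free=$("$rpc" bdev_lvol_get_lvstores -u "$lvs_uuid" | jq -r '.[0].free_clusters')

# Per the counts in the trace: 200M backing file / 4M clusters -> 49 data clusters,
# 400M after truncate+rescan -> 99; the 150M lvol holds 38 clusters, so 99 - 38 = 61 free.
(( total == 99 )) && (( free == 61 )) || echo "unexpected cluster counts: total=$total free=$free"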
00:26:39.108 11:39:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:26:39.108 11:39:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:26:39.682 [2024-11-20 11:39:24.965226] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:26:39.682 11:39:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 833df456-539a-486f-a655-40af418cf47e 00:26:39.682 11:39:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:26:39.682 11:39:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 833df456-539a-486f-a655-40af418cf47e 00:26:39.682 11:39:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:26:39.682 11:39:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:39.682 11:39:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:26:39.682 11:39:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:39.682 11:39:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:26:39.682 11:39:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:39.682 11:39:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:26:39.682 11:39:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:26:39.682 11:39:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 833df456-539a-486f-a655-40af418cf47e 00:26:39.941 2024/11/20 11:39:25 error on JSON-RPC call, method: bdev_lvol_get_lvstores, params: map[uuid:833df456-539a-486f-a655-40af418cf47e], err: error received for bdev_lvol_get_lvstores method, err: Code=-19 Msg=No such device 00:26:39.941 request: 00:26:39.941 { 00:26:39.941 "method": "bdev_lvol_get_lvstores", 00:26:39.941 "params": { 00:26:39.941 "uuid": "833df456-539a-486f-a655-40af418cf47e" 00:26:39.941 } 00:26:39.941 } 00:26:39.941 Got JSON-RPC error response 00:26:39.941 GoRPCClient: error on JSON-RPC call 00:26:39.941 11:39:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:26:39.941 11:39:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 
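Deleting the backing aio_bdev hot-removes the lvstore, so the next bdev_lvol_get_lvstores call is expected to fail with Code=-19 (No such device); the NOT helper in the trace treats that failure as success. A minimal sketch of the same negative check, assuming the helper is reduced to a plain exit-status test:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
lvs_uuid=833df456-539a-486f-a655-40af418cf47e

# Removing the AIO bdev takes the lvstore down with it ...
"$rpc" bdev_aio_delete aio_bdev

# ... so querying the store afterwards must fail (Code=-19, "No such device").
if "$rpc" bdev_lvol_get_lvstores -u "$lvs_uuid" 2>/dev/null; then
    echo "lvstore unexpectedly still present" >&2
    exit 1
fi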
00:26:39.941 11:39:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:26:39.941 11:39:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:26:39.941 11:39:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:26:40.200 aio_bdev 00:26:40.200 11:39:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 2439c5a2-0a75-4477-ae77-97c6899f2cd4 00:26:40.200 11:39:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=2439c5a2-0a75-4477-ae77-97c6899f2cd4 00:26:40.200 11:39:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:26:40.200 11:39:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:26:40.200 11:39:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:26:40.200 11:39:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:26:40.200 11:39:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:26:40.459 11:39:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 2439c5a2-0a75-4477-ae77-97c6899f2cd4 -t 2000 00:26:40.717 [ 00:26:40.717 { 00:26:40.717 "aliases": [ 00:26:40.717 "lvs/lvol" 00:26:40.717 ], 00:26:40.717 "assigned_rate_limits": { 00:26:40.717 "r_mbytes_per_sec": 0, 00:26:40.717 "rw_ios_per_sec": 0, 00:26:40.717 "rw_mbytes_per_sec": 0, 00:26:40.717 "w_mbytes_per_sec": 0 00:26:40.717 }, 00:26:40.717 "block_size": 4096, 00:26:40.717 "claimed": false, 00:26:40.717 "driver_specific": { 00:26:40.717 "lvol": { 00:26:40.717 "base_bdev": "aio_bdev", 00:26:40.717 "clone": false, 00:26:40.717 "esnap_clone": false, 00:26:40.717 "lvol_store_uuid": "833df456-539a-486f-a655-40af418cf47e", 00:26:40.717 "num_allocated_clusters": 38, 00:26:40.717 "snapshot": false, 00:26:40.717 "thin_provision": false 00:26:40.717 } 00:26:40.717 }, 00:26:40.717 "name": "2439c5a2-0a75-4477-ae77-97c6899f2cd4", 00:26:40.717 "num_blocks": 38912, 00:26:40.717 "product_name": "Logical Volume", 00:26:40.717 "supported_io_types": { 00:26:40.717 "abort": false, 00:26:40.717 "compare": false, 00:26:40.717 "compare_and_write": false, 00:26:40.717 "copy": false, 00:26:40.717 "flush": false, 00:26:40.717 "get_zone_info": false, 00:26:40.717 "nvme_admin": false, 00:26:40.717 "nvme_io": false, 00:26:40.717 "nvme_io_md": false, 00:26:40.717 "nvme_iov_md": false, 00:26:40.717 "read": true, 00:26:40.717 "reset": true, 00:26:40.717 "seek_data": true, 00:26:40.717 "seek_hole": true, 00:26:40.717 "unmap": true, 00:26:40.717 "write": true, 00:26:40.717 "write_zeroes": true, 00:26:40.717 "zcopy": false, 00:26:40.717 "zone_append": false, 00:26:40.717 "zone_management": false 00:26:40.717 }, 00:26:40.717 "uuid": 
"2439c5a2-0a75-4477-ae77-97c6899f2cd4", 00:26:40.717 "zoned": false 00:26:40.717 } 00:26:40.717 ] 00:26:40.717 11:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:26:40.717 11:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:26:40.718 11:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 833df456-539a-486f-a655-40af418cf47e 00:26:41.285 11:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:26:41.286 11:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 833df456-539a-486f-a655-40af418cf47e 00:26:41.286 11:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:26:41.544 11:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:26:41.544 11:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 2439c5a2-0a75-4477-ae77-97c6899f2cd4 00:26:41.802 11:39:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 833df456-539a-486f-a655-40af418cf47e 00:26:42.061 11:39:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:26:42.319 11:39:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:26:42.886 00:26:42.886 real 0m18.786s 00:26:42.886 user 0m18.084s 00:26:42.886 sys 0m2.125s 00:26:42.886 11:39:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:42.886 11:39:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:26:42.886 ************************************ 00:26:42.886 END TEST lvs_grow_clean 00:26:42.886 ************************************ 00:26:42.886 11:39:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:26:42.886 11:39:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:42.886 11:39:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:42.886 11:39:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:26:42.886 ************************************ 00:26:42.886 START TEST lvs_grow_dirty 00:26:42.886 ************************************ 00:26:42.886 11:39:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:26:42.886 11:39:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:26:42.886 11:39:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:26:42.887 11:39:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:26:42.887 11:39:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:26:42.887 11:39:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:26:42.887 11:39:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:26:42.887 11:39:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:26:42.887 11:39:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:26:42.887 11:39:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:26:43.145 11:39:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:26:43.145 11:39:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:26:43.403 11:39:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=63c5d849-f86b-42af-babb-6ed09bff3ce8 00:26:43.403 11:39:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 63c5d849-f86b-42af-babb-6ed09bff3ce8 00:26:43.403 11:39:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:26:43.661 11:39:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:26:43.661 11:39:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:26:43.661 11:39:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 63c5d849-f86b-42af-babb-6ed09bff3ce8 lvol 150 00:26:44.227 11:39:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=7f058404-293b-492f-a048-9b9d3fc38e96 00:26:44.227 11:39:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:26:44.227 11:39:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:26:44.227 [2024-11-20 11:39:29.769154] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:26:44.227 [2024-11-20 11:39:29.769310] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:26:44.227 true 00:26:44.227 11:39:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 63c5d849-f86b-42af-babb-6ed09bff3ce8 00:26:44.227 11:39:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:26:44.486 11:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:26:44.486 11:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:26:44.745 11:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 7f058404-293b-492f-a048-9b9d3fc38e96 00:26:45.313 11:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:26:45.313 [2024-11-20 11:39:30.897425] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:26:45.571 11:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:26:45.830 11:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=102349 00:26:45.830 11:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:45.830 11:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 102349 /var/tmp/bdevperf.sock 00:26:45.830 11:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 102349 ']' 00:26:45.830 11:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:45.830 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:26:45.830 11:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:45.830 11:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
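The dirty-mode setup above repeats the provisioning sequence: a 200M file-backed AIO bdev, an lvstore with 4 MiB clusters, a 150M lvol, a resize of the backing file so the store can be grown later, and an NVMe-oF subsystem exposing the lvol on 10.0.0.3:4420. Condensed into one hedged sketch using the commands from the trace (the UUIDs are returned by the RPCs at run time; the values shown in the log are specific to this run):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
aio_file=/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev

# 200M backing file -> AIO bdev -> lvstore with 4 MiB clusters.
rm -f "$aio_file"
truncate -s 200M "$aio_file"
"$rpc" bdev_aio_create "$aio_file" aio_bdev 4096
lvs=$("$rpc" bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs)

# 150M logical volume carved out of the store.
lvol=$("$rpc" bdev_lvol_create -u "$lvs" lvol 150)

# Resize the backing file and rescan so the lvstore can be grown during the test.
truncate -s 400M "$aio_file"
"$rpc" bdev_aio_rescan aio_bdev

# Export the lvol over NVMe/TCP on the target-namespace address.
"$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
"$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
"$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420
"$rpc" nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420

bdevperf then runs against its own RPC socket and attaches to the exported namespace with bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0, exactly as in the clean run below.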
00:26:45.830 11:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:45.830 11:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:26:45.830 11:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:26:45.830 [2024-11-20 11:39:31.341462] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 00:26:45.830 [2024-11-20 11:39:31.341575] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid102349 ] 00:26:46.088 [2024-11-20 11:39:31.490561] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:46.088 [2024-11-20 11:39:31.533183] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:46.088 11:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:46.088 11:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:26:46.088 11:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:26:46.655 Nvme0n1 00:26:46.655 11:39:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:26:46.912 [ 00:26:46.912 { 00:26:46.912 "aliases": [ 00:26:46.912 "7f058404-293b-492f-a048-9b9d3fc38e96" 00:26:46.912 ], 00:26:46.912 "assigned_rate_limits": { 00:26:46.912 "r_mbytes_per_sec": 0, 00:26:46.912 "rw_ios_per_sec": 0, 00:26:46.912 "rw_mbytes_per_sec": 0, 00:26:46.912 "w_mbytes_per_sec": 0 00:26:46.912 }, 00:26:46.912 "block_size": 4096, 00:26:46.912 "claimed": false, 00:26:46.912 "driver_specific": { 00:26:46.912 "mp_policy": "active_passive", 00:26:46.912 "nvme": [ 00:26:46.912 { 00:26:46.912 "ctrlr_data": { 00:26:46.912 "ana_reporting": false, 00:26:46.912 "cntlid": 1, 00:26:46.912 "firmware_revision": "25.01", 00:26:46.912 "model_number": "SPDK bdev Controller", 00:26:46.912 "multi_ctrlr": true, 00:26:46.912 "oacs": { 00:26:46.912 "firmware": 0, 00:26:46.912 "format": 0, 00:26:46.912 "ns_manage": 0, 00:26:46.912 "security": 0 00:26:46.912 }, 00:26:46.912 "serial_number": "SPDK0", 00:26:46.912 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:46.912 "vendor_id": "0x8086" 00:26:46.912 }, 00:26:46.912 "ns_data": { 00:26:46.912 "can_share": true, 00:26:46.912 "id": 1 00:26:46.912 }, 00:26:46.912 "trid": { 00:26:46.912 "adrfam": "IPv4", 00:26:46.912 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:46.912 "traddr": "10.0.0.3", 00:26:46.912 "trsvcid": "4420", 00:26:46.912 "trtype": "TCP" 00:26:46.912 }, 00:26:46.912 "vs": { 00:26:46.912 "nvme_version": "1.3" 00:26:46.912 } 00:26:46.912 } 00:26:46.912 ] 00:26:46.912 }, 00:26:46.912 "memory_domains": [ 00:26:46.912 { 00:26:46.912 "dma_device_id": "system", 
00:26:46.912 "dma_device_type": 1 00:26:46.912 } 00:26:46.912 ], 00:26:46.912 "name": "Nvme0n1", 00:26:46.912 "num_blocks": 38912, 00:26:46.912 "numa_id": -1, 00:26:46.912 "product_name": "NVMe disk", 00:26:46.912 "supported_io_types": { 00:26:46.912 "abort": true, 00:26:46.912 "compare": true, 00:26:46.912 "compare_and_write": true, 00:26:46.912 "copy": true, 00:26:46.912 "flush": true, 00:26:46.912 "get_zone_info": false, 00:26:46.912 "nvme_admin": true, 00:26:46.912 "nvme_io": true, 00:26:46.912 "nvme_io_md": false, 00:26:46.912 "nvme_iov_md": false, 00:26:46.912 "read": true, 00:26:46.912 "reset": true, 00:26:46.912 "seek_data": false, 00:26:46.912 "seek_hole": false, 00:26:46.912 "unmap": true, 00:26:46.912 "write": true, 00:26:46.913 "write_zeroes": true, 00:26:46.913 "zcopy": false, 00:26:46.913 "zone_append": false, 00:26:46.913 "zone_management": false 00:26:46.913 }, 00:26:46.913 "uuid": "7f058404-293b-492f-a048-9b9d3fc38e96", 00:26:46.913 "zoned": false 00:26:46.913 } 00:26:46.913 ] 00:26:46.913 11:39:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=102383 00:26:46.913 11:39:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:26:46.913 11:39:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:26:46.913 Running I/O for 10 seconds... 00:26:48.288 Latency(us) 00:26:48.288 [2024-11-20T11:39:33.877Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:48.288 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:26:48.288 Nvme0n1 : 1.00 7000.00 27.34 0.00 0.00 0.00 0.00 0.00 00:26:48.288 [2024-11-20T11:39:33.877Z] =================================================================================================================== 00:26:48.288 [2024-11-20T11:39:33.877Z] Total : 7000.00 27.34 0.00 0.00 0.00 0.00 0.00 00:26:48.288 00:26:48.860 11:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 63c5d849-f86b-42af-babb-6ed09bff3ce8 00:26:49.120 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:26:49.120 Nvme0n1 : 2.00 7442.50 29.07 0.00 0.00 0.00 0.00 0.00 00:26:49.120 [2024-11-20T11:39:34.709Z] =================================================================================================================== 00:26:49.120 [2024-11-20T11:39:34.709Z] Total : 7442.50 29.07 0.00 0.00 0.00 0.00 0.00 00:26:49.120 00:26:49.120 true 00:26:49.396 11:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 63c5d849-f86b-42af-babb-6ed09bff3ce8 00:26:49.396 11:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:26:49.660 11:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:26:49.660 11:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:26:49.660 11:39:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 102383 00:26:49.918 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:26:49.918 Nvme0n1 : 3.00 7500.00 29.30 0.00 0.00 0.00 0.00 0.00 00:26:49.918 [2024-11-20T11:39:35.507Z] =================================================================================================================== 00:26:49.918 [2024-11-20T11:39:35.507Z] Total : 7500.00 29.30 0.00 0.00 0.00 0.00 0.00 00:26:49.918 00:26:51.293 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:26:51.293 Nvme0n1 : 4.00 7528.50 29.41 0.00 0.00 0.00 0.00 0.00 00:26:51.293 [2024-11-20T11:39:36.882Z] =================================================================================================================== 00:26:51.293 [2024-11-20T11:39:36.882Z] Total : 7528.50 29.41 0.00 0.00 0.00 0.00 0.00 00:26:51.293 00:26:52.228 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:26:52.228 Nvme0n1 : 5.00 7541.40 29.46 0.00 0.00 0.00 0.00 0.00 00:26:52.228 [2024-11-20T11:39:37.817Z] =================================================================================================================== 00:26:52.228 [2024-11-20T11:39:37.817Z] Total : 7541.40 29.46 0.00 0.00 0.00 0.00 0.00 00:26:52.228 00:26:53.163 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:26:53.163 Nvme0n1 : 6.00 7542.67 29.46 0.00 0.00 0.00 0.00 0.00 00:26:53.163 [2024-11-20T11:39:38.752Z] =================================================================================================================== 00:26:53.163 [2024-11-20T11:39:38.752Z] Total : 7542.67 29.46 0.00 0.00 0.00 0.00 0.00 00:26:53.163 00:26:54.097 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:26:54.097 Nvme0n1 : 7.00 7418.57 28.98 0.00 0.00 0.00 0.00 0.00 00:26:54.097 [2024-11-20T11:39:39.686Z] =================================================================================================================== 00:26:54.097 [2024-11-20T11:39:39.686Z] Total : 7418.57 28.98 0.00 0.00 0.00 0.00 0.00 00:26:54.097 00:26:55.032 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:26:55.032 Nvme0n1 : 8.00 7385.38 28.85 0.00 0.00 0.00 0.00 0.00 00:26:55.032 [2024-11-20T11:39:40.621Z] =================================================================================================================== 00:26:55.032 [2024-11-20T11:39:40.621Z] Total : 7385.38 28.85 0.00 0.00 0.00 0.00 0.00 00:26:55.032 00:26:55.968 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:26:55.968 Nvme0n1 : 9.00 7348.44 28.70 0.00 0.00 0.00 0.00 0.00 00:26:55.968 [2024-11-20T11:39:41.557Z] =================================================================================================================== 00:26:55.968 [2024-11-20T11:39:41.557Z] Total : 7348.44 28.70 0.00 0.00 0.00 0.00 0.00 00:26:55.968 00:26:56.904 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:26:56.904 Nvme0n1 : 10.00 7332.30 28.64 0.00 0.00 0.00 0.00 0.00 00:26:56.904 [2024-11-20T11:39:42.493Z] =================================================================================================================== 00:26:56.904 [2024-11-20T11:39:42.493Z] Total : 7332.30 28.64 0.00 0.00 0.00 0.00 0.00 00:26:56.904 00:26:56.904 00:26:56.904 Latency(us) 00:26:56.904 [2024-11-20T11:39:42.493Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s 
Average min max 00:26:56.904 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:26:56.904 Nvme0n1 : 10.02 7333.79 28.65 0.00 0.00 17440.79 5868.45 163959.16 00:26:56.904 [2024-11-20T11:39:42.493Z] =================================================================================================================== 00:26:56.904 [2024-11-20T11:39:42.493Z] Total : 7333.79 28.65 0.00 0.00 17440.79 5868.45 163959.16 00:26:56.904 { 00:26:56.904 "results": [ 00:26:56.904 { 00:26:56.904 "job": "Nvme0n1", 00:26:56.904 "core_mask": "0x2", 00:26:56.904 "workload": "randwrite", 00:26:56.904 "status": "finished", 00:26:56.904 "queue_depth": 128, 00:26:56.904 "io_size": 4096, 00:26:56.904 "runtime": 10.015425, 00:26:56.904 "iops": 7333.78763257675, 00:26:56.904 "mibps": 28.64760793975293, 00:26:56.904 "io_failed": 0, 00:26:56.904 "io_timeout": 0, 00:26:56.904 "avg_latency_us": 17440.785145223595, 00:26:56.904 "min_latency_us": 5868.450909090909, 00:26:56.904 "max_latency_us": 163959.15636363637 00:26:56.904 } 00:26:56.904 ], 00:26:56.904 "core_count": 1 00:26:56.904 } 00:26:57.163 11:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 102349 00:26:57.163 11:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 102349 ']' 00:26:57.163 11:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 102349 00:26:57.164 11:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:26:57.164 11:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:57.164 11:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 102349 00:26:57.164 killing process with pid 102349 00:26:57.164 Received shutdown signal, test time was about 10.000000 seconds 00:26:57.164 00:26:57.164 Latency(us) 00:26:57.164 [2024-11-20T11:39:42.753Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:57.164 [2024-11-20T11:39:42.753Z] =================================================================================================================== 00:26:57.164 [2024-11-20T11:39:42.753Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:57.164 11:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:57.164 11:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:57.164 11:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 102349' 00:26:57.164 11:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 102349 00:26:57.164 11:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 102349 00:26:57.164 11:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:26:57.422 11:39:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:26:57.681 11:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 63c5d849-f86b-42af-babb-6ed09bff3ce8 00:26:57.681 11:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:26:57.939 11:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:26:57.939 11:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:26:57.939 11:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 101771 00:26:57.939 11:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 101771 00:26:58.198 /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 101771 Killed "${NVMF_APP[@]}" "$@" 00:26:58.198 11:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:26:58.198 11:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:26:58.198 11:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:58.198 11:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:58.198 11:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:26:58.198 11:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=102538 00:26:58.198 11:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:26:58.198 11:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 102538 00:26:58.198 11:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 102538 ']' 00:26:58.198 11:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:58.198 11:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:58.198 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:58.198 11:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
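The "dirty" in lvs_grow_dirty is produced right here: the first target (pid 101771) is killed with SIGKILL while the grown lvstore is still open, so the blobstore on the AIO file is never cleanly closed, and a second target is started in interrupt mode against the same file. Condensed, and with paths abbreviated as before, the step is roughly:

    kill -9 "$nvmfpid"                                        # leave the lvstore dirty on disk
    ip netns exec nvmf_tgt_ns_spdk nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 &
    waitforlisten "$nvmfpid"                                  # wait for /var/tmp/spdk.sock
    rpc.py bdev_aio_create ./aio_bdev aio_bdev 4096           # re-registering the file makes the
                                                              # blobstore run recovery (see the
                                                              # bs_recover notices further down)

The recovery and the follow-up geometry checks are what the rest of the trace exercises.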
00:26:58.198 11:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:58.198 11:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:26:58.198 [2024-11-20 11:39:43.623799] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:26:58.198 [2024-11-20 11:39:43.626368] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 00:26:58.198 [2024-11-20 11:39:43.626453] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:58.198 [2024-11-20 11:39:43.780806] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:58.457 [2024-11-20 11:39:43.818769] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:58.457 [2024-11-20 11:39:43.819064] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:58.457 [2024-11-20 11:39:43.819089] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:58.457 [2024-11-20 11:39:43.819100] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:58.457 [2024-11-20 11:39:43.819108] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:58.457 [2024-11-20 11:39:43.819440] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:58.457 [2024-11-20 11:39:43.874332] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:26:58.457 [2024-11-20 11:39:43.874947] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
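With the interrupt-mode target up, the remainder of the test recovers the dirty lvstore and verifies that the growth done before the SIGKILL survived. The checks traced below amount to roughly the following sketch, again with rpc.py and paths abbreviated; the expected values 61 and 99 come straight from the earlier grow step:

    rpc.py bdev_lvol_get_lvstores -u <lvs-uuid> | jq -r '.[0].free_clusters'        # expect 61
    rpc.py bdev_lvol_get_lvstores -u <lvs-uuid> | jq -r '.[0].total_data_clusters'  # expect 99
    rpc.py bdev_aio_delete aio_bdev               # hot-remove of the base bdev closes the lvstore,
    rpc.py bdev_lvol_get_lvstores -u <lvs-uuid>   # so this lookup must now fail (-19, No such device)

After that negative check the AIO bdev is re-created once more, the same assertions are repeated, and the lvol, lvstore, AIO bdev and backing file are torn down.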
00:26:59.393 11:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:59.393 11:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:26:59.393 11:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:59.393 11:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:59.393 11:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:26:59.393 11:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:59.393 11:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:26:59.393 [2024-11-20 11:39:44.946252] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:26:59.393 [2024-11-20 11:39:44.947047] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:26:59.393 [2024-11-20 11:39:44.947553] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:26:59.652 11:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:26:59.652 11:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 7f058404-293b-492f-a048-9b9d3fc38e96 00:26:59.652 11:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=7f058404-293b-492f-a048-9b9d3fc38e96 00:26:59.652 11:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:26:59.652 11:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:26:59.652 11:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:26:59.652 11:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:26:59.652 11:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:26:59.911 11:39:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 7f058404-293b-492f-a048-9b9d3fc38e96 -t 2000 00:27:00.169 [ 00:27:00.169 { 00:27:00.169 "aliases": [ 00:27:00.169 "lvs/lvol" 00:27:00.169 ], 00:27:00.169 "assigned_rate_limits": { 00:27:00.169 "r_mbytes_per_sec": 0, 00:27:00.169 "rw_ios_per_sec": 0, 00:27:00.169 "rw_mbytes_per_sec": 0, 00:27:00.169 "w_mbytes_per_sec": 0 00:27:00.169 }, 00:27:00.169 "block_size": 4096, 00:27:00.169 "claimed": false, 00:27:00.169 "driver_specific": { 00:27:00.169 "lvol": { 00:27:00.169 "base_bdev": "aio_bdev", 00:27:00.169 "clone": false, 00:27:00.169 "esnap_clone": false, 00:27:00.169 
"lvol_store_uuid": "63c5d849-f86b-42af-babb-6ed09bff3ce8", 00:27:00.169 "num_allocated_clusters": 38, 00:27:00.169 "snapshot": false, 00:27:00.169 "thin_provision": false 00:27:00.169 } 00:27:00.169 }, 00:27:00.169 "name": "7f058404-293b-492f-a048-9b9d3fc38e96", 00:27:00.169 "num_blocks": 38912, 00:27:00.169 "product_name": "Logical Volume", 00:27:00.169 "supported_io_types": { 00:27:00.169 "abort": false, 00:27:00.169 "compare": false, 00:27:00.169 "compare_and_write": false, 00:27:00.169 "copy": false, 00:27:00.169 "flush": false, 00:27:00.169 "get_zone_info": false, 00:27:00.169 "nvme_admin": false, 00:27:00.169 "nvme_io": false, 00:27:00.169 "nvme_io_md": false, 00:27:00.169 "nvme_iov_md": false, 00:27:00.169 "read": true, 00:27:00.169 "reset": true, 00:27:00.169 "seek_data": true, 00:27:00.169 "seek_hole": true, 00:27:00.169 "unmap": true, 00:27:00.169 "write": true, 00:27:00.169 "write_zeroes": true, 00:27:00.169 "zcopy": false, 00:27:00.169 "zone_append": false, 00:27:00.169 "zone_management": false 00:27:00.169 }, 00:27:00.169 "uuid": "7f058404-293b-492f-a048-9b9d3fc38e96", 00:27:00.169 "zoned": false 00:27:00.169 } 00:27:00.169 ] 00:27:00.169 11:39:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:27:00.169 11:39:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 63c5d849-f86b-42af-babb-6ed09bff3ce8 00:27:00.169 11:39:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:27:00.428 11:39:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:27:00.428 11:39:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 63c5d849-f86b-42af-babb-6ed09bff3ce8 00:27:00.428 11:39:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:27:00.687 11:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:27:00.687 11:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:27:00.944 [2024-11-20 11:39:46.528132] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:27:01.203 11:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 63c5d849-f86b-42af-babb-6ed09bff3ce8 00:27:01.203 11:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:27:01.203 11:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 63c5d849-f86b-42af-babb-6ed09bff3ce8 00:27:01.203 11:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:01.203 
11:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:01.203 11:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:01.203 11:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:01.203 11:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:01.203 11:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:01.203 11:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:01.203 11:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:27:01.203 11:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 63c5d849-f86b-42af-babb-6ed09bff3ce8 00:27:01.461 2024/11/20 11:39:46 error on JSON-RPC call, method: bdev_lvol_get_lvstores, params: map[uuid:63c5d849-f86b-42af-babb-6ed09bff3ce8], err: error received for bdev_lvol_get_lvstores method, err: Code=-19 Msg=No such device 00:27:01.461 request: 00:27:01.461 { 00:27:01.461 "method": "bdev_lvol_get_lvstores", 00:27:01.461 "params": { 00:27:01.461 "uuid": "63c5d849-f86b-42af-babb-6ed09bff3ce8" 00:27:01.461 } 00:27:01.461 } 00:27:01.461 Got JSON-RPC error response 00:27:01.461 GoRPCClient: error on JSON-RPC call 00:27:01.461 11:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:27:01.461 11:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:27:01.461 11:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:27:01.461 11:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:27:01.461 11:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:27:01.721 aio_bdev 00:27:01.721 11:39:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 7f058404-293b-492f-a048-9b9d3fc38e96 00:27:01.721 11:39:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=7f058404-293b-492f-a048-9b9d3fc38e96 00:27:01.721 11:39:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:27:01.721 11:39:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:27:01.721 11:39:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@906 -- # [[ -z '' ]] 00:27:01.721 11:39:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:27:01.721 11:39:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:27:01.980 11:39:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 7f058404-293b-492f-a048-9b9d3fc38e96 -t 2000 00:27:02.238 [ 00:27:02.238 { 00:27:02.238 "aliases": [ 00:27:02.238 "lvs/lvol" 00:27:02.238 ], 00:27:02.238 "assigned_rate_limits": { 00:27:02.238 "r_mbytes_per_sec": 0, 00:27:02.238 "rw_ios_per_sec": 0, 00:27:02.238 "rw_mbytes_per_sec": 0, 00:27:02.238 "w_mbytes_per_sec": 0 00:27:02.238 }, 00:27:02.238 "block_size": 4096, 00:27:02.238 "claimed": false, 00:27:02.238 "driver_specific": { 00:27:02.238 "lvol": { 00:27:02.238 "base_bdev": "aio_bdev", 00:27:02.238 "clone": false, 00:27:02.238 "esnap_clone": false, 00:27:02.238 "lvol_store_uuid": "63c5d849-f86b-42af-babb-6ed09bff3ce8", 00:27:02.238 "num_allocated_clusters": 38, 00:27:02.238 "snapshot": false, 00:27:02.238 "thin_provision": false 00:27:02.238 } 00:27:02.238 }, 00:27:02.238 "name": "7f058404-293b-492f-a048-9b9d3fc38e96", 00:27:02.238 "num_blocks": 38912, 00:27:02.238 "product_name": "Logical Volume", 00:27:02.238 "supported_io_types": { 00:27:02.238 "abort": false, 00:27:02.238 "compare": false, 00:27:02.239 "compare_and_write": false, 00:27:02.239 "copy": false, 00:27:02.239 "flush": false, 00:27:02.239 "get_zone_info": false, 00:27:02.239 "nvme_admin": false, 00:27:02.239 "nvme_io": false, 00:27:02.239 "nvme_io_md": false, 00:27:02.239 "nvme_iov_md": false, 00:27:02.239 "read": true, 00:27:02.239 "reset": true, 00:27:02.239 "seek_data": true, 00:27:02.239 "seek_hole": true, 00:27:02.239 "unmap": true, 00:27:02.239 "write": true, 00:27:02.239 "write_zeroes": true, 00:27:02.239 "zcopy": false, 00:27:02.239 "zone_append": false, 00:27:02.239 "zone_management": false 00:27:02.239 }, 00:27:02.239 "uuid": "7f058404-293b-492f-a048-9b9d3fc38e96", 00:27:02.239 "zoned": false 00:27:02.239 } 00:27:02.239 ] 00:27:02.239 11:39:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:27:02.239 11:39:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 63c5d849-f86b-42af-babb-6ed09bff3ce8 00:27:02.239 11:39:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:27:02.824 11:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:27:02.824 11:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 63c5d849-f86b-42af-babb-6ed09bff3ce8 00:27:02.824 11:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:27:03.082 11:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:27:03.082 
11:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 7f058404-293b-492f-a048-9b9d3fc38e96 00:27:03.342 11:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 63c5d849-f86b-42af-babb-6ed09bff3ce8 00:27:03.601 11:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:27:03.858 11:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:27:04.424 00:27:04.424 real 0m21.591s 00:27:04.424 user 0m29.308s 00:27:04.424 sys 0m7.798s 00:27:04.424 ************************************ 00:27:04.424 END TEST lvs_grow_dirty 00:27:04.424 ************************************ 00:27:04.424 11:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:04.424 11:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:27:04.424 11:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:27:04.424 11:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:27:04.424 11:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:27:04.424 11:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:27:04.424 11:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:27:04.424 11:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:27:04.424 11:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:27:04.424 11:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:27:04.424 11:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:27:04.424 nvmf_trace.0 00:27:04.424 11:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:27:04.424 11:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:27:04.424 11:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:04.424 11:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:27:04.992 11:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:04.992 11:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:27:04.992 11:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:04.992 11:39:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:04.992 rmmod nvme_tcp 00:27:04.992 rmmod nvme_fabrics 00:27:04.992 rmmod nvme_keyring 00:27:04.992 11:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:04.992 11:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:27:04.992 11:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:27:04.992 11:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 102538 ']' 00:27:04.992 11:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 102538 00:27:04.992 11:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 102538 ']' 00:27:04.992 11:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 102538 00:27:04.992 11:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:27:04.992 11:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:04.992 11:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 102538 00:27:05.249 11:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:05.249 11:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:05.249 killing process with pid 102538 00:27:05.249 11:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 102538' 00:27:05.249 11:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 102538 00:27:05.250 11:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 102538 00:27:05.250 11:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:05.250 11:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:05.250 11:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:05.250 11:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:27:05.250 11:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:27:05.250 11:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:27:05.250 11:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:05.250 11:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:05.250 11:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:27:05.250 11:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:27:05.250 11:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@234 -- 
# ip link set nvmf_init_br2 nomaster 00:27:05.250 11:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:27:05.250 11:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:27:05.250 11:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:27:05.250 11:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:27:05.250 11:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:27:05.250 11:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:27:05.250 11:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:27:05.508 11:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:27:05.508 11:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:27:05.508 11:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:27:05.508 11:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:27:05.508 11:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@246 -- # remove_spdk_ns 00:27:05.508 11:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:05.508 11:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:05.508 11:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:05.508 11:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@300 -- # return 0 00:27:05.508 00:27:05.508 real 0m42.851s 00:27:05.508 user 0m48.648s 00:27:05.508 sys 0m11.049s 00:27:05.508 11:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:05.508 ************************************ 00:27:05.508 END TEST nvmf_lvs_grow 00:27:05.508 ************************************ 00:27:05.508 11:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:27:05.508 11:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:27:05.508 11:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:27:05.508 11:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:05.508 11:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:27:05.508 ************************************ 00:27:05.508 START TEST nvmf_bdev_io_wait 00:27:05.508 ************************************ 00:27:05.508 11:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:27:05.768 * Looking for test storage... 00:27:05.768 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:27:05.768 11:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:27:05.768 11:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lcov --version 00:27:05.768 11:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:27:05.768 11:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:27:05.768 11:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:05.768 11:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:05.768 11:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:05.768 11:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:27:05.768 11:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:27:05.768 11:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:27:05.768 11:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:27:05.768 11:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:27:05.768 11:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:27:05.768 11:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:27:05.768 11:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:05.768 11:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:27:05.768 11:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:27:05.768 11:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:05.768 11:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:05.768 11:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:27:05.768 11:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:27:05.768 11:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:05.768 11:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:27:05.768 11:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:27:05.768 11:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:27:05.768 11:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:27:05.768 11:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:05.768 11:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:27:05.768 11:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:27:05.768 11:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:05.768 11:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:05.768 11:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:27:05.768 11:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:05.768 11:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:27:05.768 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:05.768 --rc genhtml_branch_coverage=1 00:27:05.768 --rc genhtml_function_coverage=1 00:27:05.768 --rc genhtml_legend=1 00:27:05.768 --rc geninfo_all_blocks=1 00:27:05.768 --rc geninfo_unexecuted_blocks=1 00:27:05.768 00:27:05.768 ' 00:27:05.768 11:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:27:05.768 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:05.768 --rc genhtml_branch_coverage=1 00:27:05.768 --rc genhtml_function_coverage=1 00:27:05.768 --rc genhtml_legend=1 00:27:05.768 --rc geninfo_all_blocks=1 00:27:05.768 --rc geninfo_unexecuted_blocks=1 00:27:05.768 00:27:05.768 ' 00:27:05.768 11:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:27:05.768 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:05.768 --rc genhtml_branch_coverage=1 00:27:05.768 --rc genhtml_function_coverage=1 00:27:05.768 --rc genhtml_legend=1 00:27:05.768 --rc geninfo_all_blocks=1 00:27:05.768 --rc geninfo_unexecuted_blocks=1 00:27:05.768 00:27:05.768 ' 00:27:05.768 11:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:27:05.768 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:05.768 --rc genhtml_branch_coverage=1 00:27:05.768 --rc genhtml_function_coverage=1 00:27:05.768 --rc genhtml_legend=1 00:27:05.768 --rc geninfo_all_blocks=1 00:27:05.768 --rc 
geninfo_unexecuted_blocks=1 00:27:05.768 00:27:05.768 ' 00:27:05.768 11:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:27:05.768 11:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:27:05.768 11:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:05.768 11:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:05.768 11:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:05.768 11:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:05.768 11:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:05.768 11:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:05.768 11:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:05.768 11:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:05.768 11:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:05.768 11:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:05.768 11:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a 00:27:05.769 11:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=806d442c-08ba-4719-b76d-33169283459a 00:27:05.769 11:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:05.769 11:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:05.769 11:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:27:05.769 11:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:05.769 11:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:27:05.769 11:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:27:05.769 11:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:05.769 11:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:05.769 11:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:05.769 11:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:05.769 11:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:05.769 11:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:05.769 11:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:27:05.769 11:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:05.769 11:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:27:05.769 11:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:05.769 11:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:05.769 11:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:05.769 11:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:05.769 11:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- 
# NVMF_APP+=("${NO_HUGE[@]}") 00:27:05.769 11:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:27:05.769 11:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:27:05.769 11:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:05.769 11:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:05.769 11:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:05.769 11:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:05.769 11:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:05.769 11:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:27:05.769 11:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:05.769 11:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:05.769 11:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:05.769 11:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:05.769 11:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:05.769 11:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:05.769 11:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:05.769 11:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:05.769 11:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:27:05.769 11:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:27:05.769 11:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:27:05.769 11:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:27:05.769 11:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:27:05.769 11:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@460 -- # nvmf_veth_init 00:27:05.769 11:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:05.769 11:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:27:05.769 11:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:27:05.769 11:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:27:05.769 11:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:05.769 11:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:27:05.769 11:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:27:05.769 11:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:27:05.769 11:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:27:05.769 11:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:27:05.769 11:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:27:05.769 11:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:05.769 11:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:27:05.769 11:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:27:05.769 11:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:27:05.769 11:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:27:05.769 11:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:27:05.769 Cannot find device "nvmf_init_br" 00:27:05.769 11:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # true 00:27:05.769 11:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:27:05.769 Cannot find device "nvmf_init_br2" 00:27:05.769 11:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # true 00:27:05.769 11:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:27:05.769 Cannot find device "nvmf_tgt_br" 00:27:05.769 11:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@164 -- # true 00:27:05.769 11:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:27:05.769 Cannot find device "nvmf_tgt_br2" 00:27:05.769 11:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@165 -- # true 00:27:05.769 11:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:27:05.769 Cannot find device "nvmf_init_br" 00:27:05.769 11:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # true 00:27:05.770 11:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:27:05.770 Cannot find device "nvmf_init_br2" 00:27:05.770 11:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@167 -- # true 00:27:05.770 11:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@168 -- 
# ip link set nvmf_tgt_br down 00:27:05.770 Cannot find device "nvmf_tgt_br" 00:27:05.770 11:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@168 -- # true 00:27:05.770 11:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:27:05.770 Cannot find device "nvmf_tgt_br2" 00:27:05.770 11:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # true 00:27:05.770 11:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:27:05.770 Cannot find device "nvmf_br" 00:27:05.770 11:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # true 00:27:05.770 11:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:27:06.028 Cannot find device "nvmf_init_if" 00:27:06.028 11:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # true 00:27:06.028 11:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:27:06.028 Cannot find device "nvmf_init_if2" 00:27:06.028 11:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@172 -- # true 00:27:06.028 11:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:27:06.028 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:27:06.028 11:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@173 -- # true 00:27:06.028 11:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:27:06.028 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:27:06.028 11:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # true 00:27:06.028 11:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:27:06.028 11:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:27:06.028 11:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:27:06.028 11:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:27:06.028 11:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:27:06.028 11:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:27:06.028 11:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:27:06.028 11:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:27:06.028 11:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:27:06.028 11:39:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:27:06.028 11:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:27:06.028 11:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:27:06.028 11:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:27:06.029 11:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:27:06.029 11:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:27:06.029 11:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:27:06.029 11:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:27:06.029 11:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:27:06.029 11:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:27:06.029 11:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:27:06.029 11:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:27:06.029 11:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:27:06.029 11:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:27:06.029 11:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:27:06.029 11:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:27:06.029 11:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:27:06.029 11:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:27:06.029 11:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:27:06.029 11:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:27:06.029 11:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:27:06.288 11:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:27:06.288 
11:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:27:06.288 11:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:27:06.288 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:27:06.288 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.097 ms 00:27:06.288 00:27:06.288 --- 10.0.0.3 ping statistics --- 00:27:06.288 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:06.288 rtt min/avg/max/mdev = 0.097/0.097/0.097/0.000 ms 00:27:06.288 11:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:27:06.288 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:27:06.288 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.064 ms 00:27:06.288 00:27:06.288 --- 10.0.0.4 ping statistics --- 00:27:06.288 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:06.288 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:27:06.288 11:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:27:06.288 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:06.288 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:27:06.288 00:27:06.288 --- 10.0.0.1 ping statistics --- 00:27:06.288 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:06.288 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:27:06.288 11:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:27:06.288 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:27:06.288 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.081 ms 00:27:06.288 00:27:06.288 --- 10.0.0.2 ping statistics --- 00:27:06.288 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:06.288 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:27:06.288 11:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:06.288 11:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@461 -- # return 0 00:27:06.288 11:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:06.288 11:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:06.288 11:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:06.288 11:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:06.288 11:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:06.288 11:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:06.288 11:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:06.288 11:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:27:06.288 11:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:06.288 11:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:06.288 11:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:27:06.288 11:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=103014 00:27:06.288 11:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc 00:27:06.288 11:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 103014 00:27:06.288 11:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 103014 ']' 00:27:06.288 11:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:06.288 11:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:06.288 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:06.288 11:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
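The nvmf_veth_init sequence logged above builds the virtual test network the rest of this run relies on: one veth pair per interface for the initiator and target sides, the target ends moved into the nvmf_tgt_ns_spdk namespace, the host-side peers enslaved to the nvmf_br bridge, 10.0.0.1/10.0.0.2 assigned on the host and 10.0.0.3/10.0.0.4 inside the namespace, SPDK_NVMF-tagged iptables ACCEPT rules for TCP port 4420, and one ping per address to confirm reachability (the earlier "Cannot find device" / "Cannot open network namespace" messages are just the helper clearing leftovers from a previous run). A condensed sketch of the first interface pair, taken from the commands in the trace; the helper repeats the same steps for the *_if2/*_br2 pair:

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator-side pair
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br      # target-side pair
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up; ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge; ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.3                                  # host -> target namespace
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1   # target namespace -> host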
00:27:06.288 11:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:06.288 11:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:27:06.288 [2024-11-20 11:39:51.737302] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:27:06.288 [2024-11-20 11:39:51.738567] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 00:27:06.288 [2024-11-20 11:39:51.738651] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:06.547 [2024-11-20 11:39:51.890676] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:06.547 [2024-11-20 11:39:51.931905] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:06.547 [2024-11-20 11:39:51.932147] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:06.547 [2024-11-20 11:39:51.932311] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:06.547 [2024-11-20 11:39:51.932457] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:06.547 [2024-11-20 11:39:51.932510] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:06.547 [2024-11-20 11:39:51.933437] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:06.547 [2024-11-20 11:39:51.933809] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:27:06.547 [2024-11-20 11:39:51.933814] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:06.547 [2024-11-20 11:39:51.933551] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:27:06.548 [2024-11-20 11:39:51.934366] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
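The nvmfappstart call above launched the target inside that namespace. With -e 0xFFFF --wait-for-rpc it comes up with all tracepoint groups enabled and only the RPC server running, deferring the rest of initialization until framework_start_init is issued below; --interrupt-mode -m 0xF starts the four reactors (cores 0-3) in interrupt-driven rather than polled scheduling, which is what the spdk_interrupt_mode_enable and intr-mode notices report. A minimal sketch of the launch-and-wait step, assuming the default /var/tmp/spdk.sock RPC socket (the test's waitforlisten helper does the equivalent polling):

    ip netns exec nvmf_tgt_ns_spdk \
        ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc &
    nvmfpid=$!
    # do not send configuration RPCs until the RPC server is actually listening
    while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.1; done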
00:27:06.548 11:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:06.548 11:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:27:06.548 11:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:06.548 11:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:06.548 11:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:27:06.548 11:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:06.548 11:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:27:06.548 11:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:06.548 11:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:27:06.548 11:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:06.548 11:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:27:06.548 11:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:06.548 11:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:27:06.548 [2024-11-20 11:39:52.096743] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:27:06.548 [2024-11-20 11:39:52.097207] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:27:06.548 [2024-11-20 11:39:52.097504] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:27:06.548 [2024-11-20 11:39:52.097545] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
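While the target is still paused in that pre-init state, the test pushes its bdev options and then completes startup: bdev_set_options -p 5 -c 1 shrinks the bdev_io pool to five entries with a per-thread cache of one, presumably so the queue-depth-128 jobs below run out of bdev_io structures and exercise the IO-wait path this test is named after, and framework_start_init finishes the initialization deferred by --wait-for-rpc. rpc_cmd in the trace is the test wrapper around the JSON-RPC client; an equivalent sketch using scripts/rpc.py directly:

    # must run before framework_start_init, while the app is still waiting for RPC
    scripts/rpc.py bdev_set_options -p 5 -c 1   # tiny bdev_io pool and cache
    scripts/rpc.py framework_start_init         # finish the deferred startup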
00:27:06.548 11:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:06.548 11:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:06.548 11:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:06.548 11:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:27:06.548 [2024-11-20 11:39:52.110790] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:06.548 11:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:06.548 11:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:27:06.548 11:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:06.548 11:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:27:06.807 Malloc0 00:27:06.807 11:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:06.807 11:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:06.807 11:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:06.807 11:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:27:06.807 11:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:06.807 11:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:06.807 11:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:06.807 11:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:27:06.807 11:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:06.807 11:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:27:06.807 11:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:06.807 11:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:27:06.807 [2024-11-20 11:39:52.166838] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:27:06.807 11:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:06.807 11:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=103053 00:27:06.807 11:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=103055 00:27:06.807 11:39:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:27:06.807 11:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:27:06.807 11:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:27:06.807 11:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:27:06.807 11:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=103057 00:27:06.808 11:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:06.808 11:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:06.808 { 00:27:06.808 "params": { 00:27:06.808 "name": "Nvme$subsystem", 00:27:06.808 "trtype": "$TEST_TRANSPORT", 00:27:06.808 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:06.808 "adrfam": "ipv4", 00:27:06.808 "trsvcid": "$NVMF_PORT", 00:27:06.808 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:06.808 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:06.808 "hdgst": ${hdgst:-false}, 00:27:06.808 "ddgst": ${ddgst:-false} 00:27:06.808 }, 00:27:06.808 "method": "bdev_nvme_attach_controller" 00:27:06.808 } 00:27:06.808 EOF 00:27:06.808 )") 00:27:06.808 11:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:27:06.808 11:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:27:06.808 11:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:27:06.808 11:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=103059 00:27:06.808 11:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:27:06.808 11:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:27:06.808 11:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:27:06.808 11:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:06.808 11:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:06.808 { 00:27:06.808 "params": { 00:27:06.808 "name": "Nvme$subsystem", 00:27:06.808 "trtype": "$TEST_TRANSPORT", 00:27:06.808 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:06.808 "adrfam": "ipv4", 00:27:06.808 "trsvcid": "$NVMF_PORT", 00:27:06.808 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:06.808 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:06.808 "hdgst": ${hdgst:-false}, 00:27:06.808 "ddgst": ${ddgst:-false} 00:27:06.808 }, 00:27:06.808 "method": "bdev_nvme_attach_controller" 00:27:06.808 } 00:27:06.808 EOF 00:27:06.808 )") 00:27:06.808 11:39:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:27:06.808 11:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:27:06.808 11:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:27:06.808 11:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:27:06.808 11:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:27:06.808 11:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:06.808 11:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:06.808 { 00:27:06.808 "params": { 00:27:06.808 "name": "Nvme$subsystem", 00:27:06.808 "trtype": "$TEST_TRANSPORT", 00:27:06.808 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:06.808 "adrfam": "ipv4", 00:27:06.808 "trsvcid": "$NVMF_PORT", 00:27:06.808 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:06.808 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:06.808 "hdgst": ${hdgst:-false}, 00:27:06.808 "ddgst": ${ddgst:-false} 00:27:06.808 }, 00:27:06.808 "method": "bdev_nvme_attach_controller" 00:27:06.808 } 00:27:06.808 EOF 00:27:06.808 )") 00:27:06.808 11:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:27:06.808 11:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:27:06.808 11:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:27:06.808 11:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:06.808 11:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:06.808 { 00:27:06.808 "params": { 00:27:06.808 "name": "Nvme$subsystem", 00:27:06.808 "trtype": "$TEST_TRANSPORT", 00:27:06.808 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:06.808 "adrfam": "ipv4", 00:27:06.808 "trsvcid": "$NVMF_PORT", 00:27:06.808 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:06.808 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:06.808 "hdgst": ${hdgst:-false}, 00:27:06.808 "ddgst": ${ddgst:-false} 00:27:06.808 }, 00:27:06.808 "method": "bdev_nvme_attach_controller" 00:27:06.808 } 00:27:06.808 EOF 00:27:06.808 )") 00:27:06.808 11:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:27:06.808 11:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
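The nvmf_create_transport, bdev_malloc_create, nvmf_create_subsystem, nvmf_subsystem_add_ns and nvmf_subsystem_add_listener calls above finish the target side: a TCP transport with the logged -o -u 8192 options, a 64 MiB malloc bdev with 512-byte blocks (the MALLOC_BDEV_SIZE/MALLOC_BLOCK_SIZE values set at the top of the script), and subsystem nqn.2016-06.io.spdk:cnode1 exposing it on 10.0.0.3:4420. The same configuration expressed directly against scripts/rpc.py:

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420

The gen_nvmf_target_json heredocs being assembled here produce the bdev_nvme_attach_controller configuration that each bdevperf instance reads; the resolved JSON is printed just below.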
00:27:06.808 11:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:27:06.808 11:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:27:06.808 11:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:27:06.808 "params": { 00:27:06.808 "name": "Nvme1", 00:27:06.808 "trtype": "tcp", 00:27:06.808 "traddr": "10.0.0.3", 00:27:06.808 "adrfam": "ipv4", 00:27:06.808 "trsvcid": "4420", 00:27:06.808 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:06.808 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:06.808 "hdgst": false, 00:27:06.808 "ddgst": false 00:27:06.808 }, 00:27:06.808 "method": "bdev_nvme_attach_controller" 00:27:06.808 }' 00:27:06.808 11:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:27:06.808 11:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:27:06.808 11:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:27:06.808 11:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:27:06.808 "params": { 00:27:06.808 "name": "Nvme1", 00:27:06.808 "trtype": "tcp", 00:27:06.808 "traddr": "10.0.0.3", 00:27:06.808 "adrfam": "ipv4", 00:27:06.808 "trsvcid": "4420", 00:27:06.808 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:06.808 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:06.808 "hdgst": false, 00:27:06.808 "ddgst": false 00:27:06.808 }, 00:27:06.808 "method": "bdev_nvme_attach_controller" 00:27:06.808 }' 00:27:06.808 11:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:27:06.809 11:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:27:06.809 11:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:27:06.809 "params": { 00:27:06.809 "name": "Nvme1", 00:27:06.809 "trtype": "tcp", 00:27:06.809 "traddr": "10.0.0.3", 00:27:06.809 "adrfam": "ipv4", 00:27:06.809 "trsvcid": "4420", 00:27:06.809 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:06.809 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:06.809 "hdgst": false, 00:27:06.809 "ddgst": false 00:27:06.809 }, 00:27:06.809 "method": "bdev_nvme_attach_controller" 00:27:06.809 }' 00:27:06.809 11:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:27:06.809 11:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:27:06.809 11:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:27:06.809 "params": { 00:27:06.809 "name": "Nvme1", 00:27:06.809 "trtype": "tcp", 00:27:06.809 "traddr": "10.0.0.3", 00:27:06.809 "adrfam": "ipv4", 00:27:06.809 "trsvcid": "4420", 00:27:06.809 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:06.809 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:06.809 "hdgst": false, 00:27:06.809 "ddgst": false 00:27:06.809 }, 00:27:06.809 "method": "bdev_nvme_attach_controller" 00:27:06.809 }' 00:27:06.809 [2024-11-20 11:39:52.230797] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 
00:27:06.809 [2024-11-20 11:39:52.230891] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:27:06.809 11:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 103053 00:27:06.809 [2024-11-20 11:39:52.252220] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 00:27:06.809 [2024-11-20 11:39:52.252307] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:27:06.809 [2024-11-20 11:39:52.254518] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 00:27:06.809 [2024-11-20 11:39:52.254668] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:27:06.809 [2024-11-20 11:39:52.255124] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 00:27:06.809 [2024-11-20 11:39:52.255196] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:27:07.067 [2024-11-20 11:39:52.427179] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:07.067 [2024-11-20 11:39:52.458957] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:27:07.067 [2024-11-20 11:39:52.464149] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:07.067 [2024-11-20 11:39:52.496880] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:27:07.067 [2024-11-20 11:39:52.508073] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:07.067 [2024-11-20 11:39:52.540135] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:27:07.067 [2024-11-20 11:39:52.555973] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:07.067 Running I/O for 1 seconds... 00:27:07.067 [2024-11-20 11:39:52.588076] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:27:07.067 Running I/O for 1 seconds... 00:27:07.067 Running I/O for 1 seconds... 00:27:07.325 Running I/O for 1 seconds... 
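At this point four bdevperf instances are running in parallel against the same subsystem, one per workload (write, read, flush, unmap), each pinned to its own core (-m 0x10/0x20/0x40/0x80) with queue depth 128, 4 KiB I/O and a 1-second run. Each reads its bdev configuration from /dev/fd/63, i.e. the gen_nvmf_target_json output printed above, whose essential content is the bdev_nvme_attach_controller call to 10.0.0.3:4420 / nqn.2016-06.io.spdk:cnode1. A sketch of one such invocation (the write job), using process substitution to reproduce the /dev/fd/63 plumbing from the trace:

    ./build/examples/bdevperf -m 0x10 -i 1 -q 128 -o 4096 -w write -t 1 -s 256 \
        --json <(gen_nvmf_target_json)   # attaches Nvme1 to 10.0.0.3:4420, cnode1

The per-workload IOPS/Latency tables that follow summarize each of the four 1-second runs against the resulting Nvme1n1 bdev.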
00:27:08.261 5329.00 IOPS, 20.82 MiB/s [2024-11-20T11:39:53.850Z] 163544.00 IOPS, 638.84 MiB/s 00:27:08.261 Latency(us) 00:27:08.261 [2024-11-20T11:39:53.850Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:08.261 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:27:08.261 Nvme1n1 : 1.04 5235.64 20.45 0.00 0.00 23983.66 4915.20 50522.30 00:27:08.261 [2024-11-20T11:39:53.850Z] =================================================================================================================== 00:27:08.261 [2024-11-20T11:39:53.850Z] Total : 5235.64 20.45 0.00 0.00 23983.66 4915.20 50522.30 00:27:08.261 00:27:08.261 Latency(us) 00:27:08.261 [2024-11-20T11:39:53.850Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:08.261 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:27:08.261 Nvme1n1 : 1.00 163214.80 637.56 0.00 0.00 779.84 309.06 1995.87 00:27:08.261 [2024-11-20T11:39:53.850Z] =================================================================================================================== 00:27:08.261 [2024-11-20T11:39:53.850Z] Total : 163214.80 637.56 0.00 0.00 779.84 309.06 1995.87 00:27:08.261 9384.00 IOPS, 36.66 MiB/s 00:27:08.261 Latency(us) 00:27:08.261 [2024-11-20T11:39:53.850Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:08.261 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:27:08.261 Nvme1n1 : 1.01 9461.35 36.96 0.00 0.00 13475.08 2770.39 19065.02 00:27:08.261 [2024-11-20T11:39:53.850Z] =================================================================================================================== 00:27:08.261 [2024-11-20T11:39:53.850Z] Total : 9461.35 36.96 0.00 0.00 13475.08 2770.39 19065.02 00:27:08.261 4906.00 IOPS, 19.16 MiB/s 00:27:08.261 Latency(us) 00:27:08.261 [2024-11-20T11:39:53.850Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:08.261 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:27:08.261 Nvme1n1 : 1.03 4877.14 19.05 0.00 0.00 25854.43 10128.29 47424.23 00:27:08.261 [2024-11-20T11:39:53.850Z] =================================================================================================================== 00:27:08.261 [2024-11-20T11:39:53.850Z] Total : 4877.14 19.05 0.00 0.00 25854.43 10128.29 47424.23 00:27:08.261 11:39:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 103055 00:27:08.520 11:39:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 103057 00:27:08.520 11:39:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 103059 00:27:08.520 11:39:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:08.520 11:39:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:08.520 11:39:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:27:08.520 11:39:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:08.520 11:39:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:27:08.520 11:39:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:27:08.520 11:39:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:08.520 11:39:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:27:08.520 11:39:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:08.520 11:39:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:27:08.520 11:39:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:08.520 11:39:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:08.520 rmmod nvme_tcp 00:27:08.520 rmmod nvme_fabrics 00:27:08.520 rmmod nvme_keyring 00:27:08.520 11:39:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:08.520 11:39:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:27:08.520 11:39:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:27:08.520 11:39:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 103014 ']' 00:27:08.520 11:39:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 103014 00:27:08.520 11:39:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 103014 ']' 00:27:08.520 11:39:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 103014 00:27:08.520 11:39:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:27:08.520 11:39:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:08.520 11:39:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 103014 00:27:08.520 11:39:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:08.520 11:39:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:08.520 killing process with pid 103014 00:27:08.520 11:39:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 103014' 00:27:08.520 11:39:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 103014 00:27:08.520 11:39:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 103014 00:27:08.779 11:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:08.779 11:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:08.779 11:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:08.779 11:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:27:08.779 11:39:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:27:08.779 11:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:08.779 11:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:27:08.779 11:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:08.779 11:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:27:08.779 11:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:27:08.779 11:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:27:08.779 11:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:27:08.779 11:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:27:08.779 11:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:27:08.779 11:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:27:08.779 11:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:27:08.779 11:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:27:08.779 11:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:27:08.779 11:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:27:08.779 11:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:27:08.779 11:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:27:08.779 11:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:27:08.779 11:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@246 -- # remove_spdk_ns 00:27:08.779 11:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:08.779 11:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:08.779 11:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:09.042 11:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@300 -- # return 0 00:27:09.042 00:27:09.042 real 0m3.366s 00:27:09.042 user 0m11.829s 00:27:09.042 sys 0m2.311s 00:27:09.042 11:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:09.042 ************************************ 00:27:09.042 END TEST nvmf_bdev_io_wait 00:27:09.042 ************************************ 
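The nvmftestfini trap that closes the test mirrors the setup: unload the nvme-tcp module stack, kill the target (pid 103014), strip only the SPDK_NVMF-tagged iptables rules, and tear the veth/bridge topology and namespace back down. Condensed from the commands in the trace; remove_spdk_ns is the helper that finally drops the namespace, sketched here as a plain ip netns delete:

    modprobe -v -r nvme-tcp
    kill "$nvmfpid"                                          # killprocess
    iptables-save | grep -v SPDK_NVMF | iptables-restore     # drop only this test's rules
    ip link set nvmf_init_br nomaster; ip link set nvmf_init_br down
    ip link set nvmf_tgt_br  nomaster; ip link set nvmf_tgt_br  down
    ip link delete nvmf_br type bridge
    ip link delete nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
    ip netns delete nvmf_tgt_ns_spdk                         # assumed effect of remove_spdk_ns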
00:27:09.042 11:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:27:09.042 11:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:27:09.042 11:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:27:09.042 11:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:09.042 11:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:27:09.042 ************************************ 00:27:09.042 START TEST nvmf_queue_depth 00:27:09.042 ************************************ 00:27:09.042 11:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:27:09.042 * Looking for test storage... 00:27:09.042 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:27:09.042 11:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:27:09.042 11:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:27:09.042 11:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lcov --version 00:27:09.042 11:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:27:09.042 11:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:09.042 11:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:09.042 11:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:09.042 11:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:27:09.042 11:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:27:09.042 11:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:27:09.042 11:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:27:09.042 11:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:27:09.042 11:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:27:09.042 11:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:27:09.042 11:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:09.042 11:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:27:09.042 11:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:27:09.042 11:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:09.042 11:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > 
ver2_l ? ver1_l : ver2_l) )) 00:27:09.042 11:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:27:09.042 11:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:27:09.042 11:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:09.042 11:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:27:09.042 11:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:27:09.042 11:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:27:09.042 11:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:27:09.042 11:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:09.042 11:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:27:09.042 11:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:27:09.042 11:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:09.042 11:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:09.043 11:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:27:09.043 11:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:09.043 11:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:27:09.043 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:09.043 --rc genhtml_branch_coverage=1 00:27:09.043 --rc genhtml_function_coverage=1 00:27:09.043 --rc genhtml_legend=1 00:27:09.043 --rc geninfo_all_blocks=1 00:27:09.043 --rc geninfo_unexecuted_blocks=1 00:27:09.043 00:27:09.043 ' 00:27:09.043 11:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:27:09.043 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:09.043 --rc genhtml_branch_coverage=1 00:27:09.043 --rc genhtml_function_coverage=1 00:27:09.043 --rc genhtml_legend=1 00:27:09.043 --rc geninfo_all_blocks=1 00:27:09.043 --rc geninfo_unexecuted_blocks=1 00:27:09.043 00:27:09.043 ' 00:27:09.043 11:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:27:09.043 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:09.043 --rc genhtml_branch_coverage=1 00:27:09.043 --rc genhtml_function_coverage=1 00:27:09.043 --rc genhtml_legend=1 00:27:09.043 --rc geninfo_all_blocks=1 00:27:09.043 --rc geninfo_unexecuted_blocks=1 00:27:09.043 00:27:09.043 ' 00:27:09.043 11:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:27:09.043 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:09.043 --rc genhtml_branch_coverage=1 00:27:09.043 --rc genhtml_function_coverage=1 00:27:09.043 --rc genhtml_legend=1 00:27:09.043 --rc geninfo_all_blocks=1 00:27:09.043 --rc 
geninfo_unexecuted_blocks=1 00:27:09.043 00:27:09.043 ' 00:27:09.043 11:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:27:09.043 11:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:27:09.043 11:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:09.043 11:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:09.043 11:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:09.043 11:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:09.043 11:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:09.043 11:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:09.043 11:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:09.043 11:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:09.308 11:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:09.308 11:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:09.308 11:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a 00:27:09.308 11:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=806d442c-08ba-4719-b76d-33169283459a 00:27:09.308 11:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:09.308 11:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:09.308 11:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:27:09.308 11:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:09.308 11:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:27:09.308 11:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:27:09.308 11:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:09.308 11:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:09.308 11:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:09.308 11:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:09.308 11:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:09.308 11:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:09.308 11:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:27:09.308 11:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:09.308 11:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:27:09.308 11:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:09.308 11:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:09.308 11:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:09.308 11:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:09.308 11:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:27:09.308 11:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:27:09.308 11:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:27:09.308 11:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:09.308 11:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:09.308 11:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:09.308 11:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:27:09.308 11:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:27:09.308 11:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:27:09.308 11:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:27:09.308 11:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:09.308 11:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:09.308 11:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:09.308 11:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:09.308 11:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:09.308 11:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:09.308 11:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:09.308 11:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:09.308 11:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:27:09.308 11:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:27:09.308 11:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:27:09.308 11:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:27:09.308 11:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:27:09.308 11:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@460 -- # nvmf_veth_init 00:27:09.308 11:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:09.308 11:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:27:09.308 11:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:27:09.308 11:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@148 -- # 
NVMF_SECOND_TARGET_IP=10.0.0.4 00:27:09.308 11:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:09.308 11:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:27:09.308 11:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:27:09.308 11:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:27:09.308 11:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:27:09.308 11:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:27:09.308 11:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:27:09.308 11:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:09.308 11:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:27:09.308 11:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:27:09.308 11:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:27:09.308 11:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:27:09.308 11:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:27:09.308 Cannot find device "nvmf_init_br" 00:27:09.308 11:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@162 -- # true 00:27:09.309 11:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:27:09.309 Cannot find device "nvmf_init_br2" 00:27:09.309 11:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@163 -- # true 00:27:09.309 11:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:27:09.309 Cannot find device "nvmf_tgt_br" 00:27:09.309 11:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@164 -- # true 00:27:09.309 11:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:27:09.309 Cannot find device "nvmf_tgt_br2" 00:27:09.309 11:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@165 -- # true 00:27:09.309 11:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:27:09.309 Cannot find device "nvmf_init_br" 00:27:09.309 11:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@166 -- # true 00:27:09.309 11:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:27:09.309 Cannot find device "nvmf_init_br2" 00:27:09.309 11:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@167 -- # true 00:27:09.309 
11:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:27:09.309 Cannot find device "nvmf_tgt_br" 00:27:09.309 11:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@168 -- # true 00:27:09.309 11:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:27:09.309 Cannot find device "nvmf_tgt_br2" 00:27:09.309 11:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@169 -- # true 00:27:09.309 11:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:27:09.309 Cannot find device "nvmf_br" 00:27:09.309 11:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@170 -- # true 00:27:09.309 11:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:27:09.309 Cannot find device "nvmf_init_if" 00:27:09.309 11:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@171 -- # true 00:27:09.309 11:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:27:09.309 Cannot find device "nvmf_init_if2" 00:27:09.309 11:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@172 -- # true 00:27:09.309 11:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:27:09.309 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:27:09.309 11:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@173 -- # true 00:27:09.309 11:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:27:09.309 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:27:09.309 11:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@174 -- # true 00:27:09.309 11:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:27:09.309 11:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:27:09.309 11:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:27:09.309 11:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:27:09.309 11:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:27:09.309 11:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:27:09.309 11:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:27:09.568 11:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:27:09.568 11:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@191 -- 
# ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:27:09.568 11:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:27:09.568 11:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:27:09.568 11:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:27:09.568 11:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:27:09.568 11:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:27:09.568 11:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:27:09.568 11:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:27:09.568 11:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:27:09.568 11:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:27:09.568 11:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:27:09.568 11:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:27:09.568 11:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:27:09.568 11:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:27:09.568 11:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:27:09.568 11:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:27:09.568 11:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:27:09.568 11:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:27:09.568 11:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:27:09.568 11:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:27:09.568 11:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:27:09.568 11:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:27:09.568 11:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@219 -- # ipts -A FORWARD -i 
nvmf_br -o nvmf_br -j ACCEPT 00:27:09.568 11:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:27:09.568 11:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:27:09.568 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:27:09.568 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.075 ms 00:27:09.568 00:27:09.568 --- 10.0.0.3 ping statistics --- 00:27:09.568 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:09.568 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:27:09.568 11:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:27:09.568 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:27:09.568 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.078 ms 00:27:09.568 00:27:09.568 --- 10.0.0.4 ping statistics --- 00:27:09.568 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:09.568 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:27:09.568 11:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:27:09.568 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:09.568 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.042 ms 00:27:09.568 00:27:09.568 --- 10.0.0.1 ping statistics --- 00:27:09.568 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:09.568 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:27:09.568 11:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:27:09.568 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:27:09.568 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.094 ms 00:27:09.568 00:27:09.568 --- 10.0.0.2 ping statistics --- 00:27:09.568 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:09.568 rtt min/avg/max/mdev = 0.094/0.094/0.094/0.000 ms 00:27:09.568 11:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:09.568 11:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@461 -- # return 0 00:27:09.568 11:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:09.568 11:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:09.568 11:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:09.568 11:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:09.568 11:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:09.569 11:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:09.569 11:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:09.569 11:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:27:09.569 11:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:09.569 11:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:09.569 11:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:27:09.569 11:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=103313 00:27:09.569 11:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 103313 00:27:09.569 11:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 103313 ']' 00:27:09.569 11:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:27:09.569 11:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:09.569 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:09.569 11:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:09.569 11:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:27:09.569 11:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:09.569 11:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:27:09.828 [2024-11-20 11:39:55.166530] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:27:09.828 [2024-11-20 11:39:55.167849] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 00:27:09.828 [2024-11-20 11:39:55.167930] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:09.828 [2024-11-20 11:39:55.325772] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:09.828 [2024-11-20 11:39:55.363539] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:09.828 [2024-11-20 11:39:55.363603] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:09.828 [2024-11-20 11:39:55.363629] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:09.828 [2024-11-20 11:39:55.363641] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:09.828 [2024-11-20 11:39:55.363650] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:09.828 [2024-11-20 11:39:55.363991] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:10.086 [2024-11-20 11:39:55.418468] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:27:10.086 [2024-11-20 11:39:55.418905] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:27:10.086 11:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:10.086 11:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:27:10.086 11:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:10.086 11:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:10.086 11:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:27:10.086 11:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:10.086 11:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:10.086 11:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:10.086 11:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:27:10.086 [2024-11-20 11:39:55.496766] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:10.086 11:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:10.086 11:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:27:10.086 11:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:10.086 11:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:27:10.086 Malloc0 00:27:10.086 11:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:10.086 11:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:10.086 11:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:10.086 11:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:27:10.086 11:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:10.086 11:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:10.086 11:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:10.086 11:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:27:10.086 11:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:10.086 11:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:27:10.087 11:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 
00:27:10.087 11:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:27:10.087 [2024-11-20 11:39:55.540798] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:27:10.087 11:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:10.087 11:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=103350 00:27:10.087 11:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:27:10.087 11:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:10.087 11:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 103350 /var/tmp/bdevperf.sock 00:27:10.087 11:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 103350 ']' 00:27:10.087 11:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:10.087 11:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:10.087 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:27:10.087 11:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:10.087 11:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:10.087 11:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:27:10.087 [2024-11-20 11:39:55.605186] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 
00:27:10.087 [2024-11-20 11:39:55.605295] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid103350 ] 00:27:10.345 [2024-11-20 11:39:55.759357] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:10.345 [2024-11-20 11:39:55.795269] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:10.345 11:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:10.345 11:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:27:10.345 11:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:27:10.345 11:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:10.345 11:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:27:10.604 NVMe0n1 00:27:10.604 11:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:10.604 11:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:27:10.604 Running I/O for 10 seconds... 00:27:12.916 7588.00 IOPS, 29.64 MiB/s [2024-11-20T11:39:59.439Z] 7984.00 IOPS, 31.19 MiB/s [2024-11-20T11:40:00.374Z] 7775.00 IOPS, 30.37 MiB/s [2024-11-20T11:40:01.325Z] 7787.75 IOPS, 30.42 MiB/s [2024-11-20T11:40:02.260Z] 7908.60 IOPS, 30.89 MiB/s [2024-11-20T11:40:03.197Z] 8028.33 IOPS, 31.36 MiB/s [2024-11-20T11:40:04.132Z] 8087.86 IOPS, 31.59 MiB/s [2024-11-20T11:40:05.509Z] 8186.12 IOPS, 31.98 MiB/s [2024-11-20T11:40:06.444Z] 8237.33 IOPS, 32.18 MiB/s [2024-11-20T11:40:06.444Z] 8298.30 IOPS, 32.42 MiB/s 00:27:20.855 Latency(us) 00:27:20.855 [2024-11-20T11:40:06.444Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:20.855 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:27:20.855 Verification LBA range: start 0x0 length 0x4000 00:27:20.855 NVMe0n1 : 10.08 8329.11 32.54 0.00 0.00 122390.64 27525.12 161099.40 00:27:20.855 [2024-11-20T11:40:06.444Z] =================================================================================================================== 00:27:20.855 [2024-11-20T11:40:06.444Z] Total : 8329.11 32.54 0.00 0.00 122390.64 27525.12 161099.40 00:27:20.855 { 00:27:20.855 "results": [ 00:27:20.855 { 00:27:20.855 "job": "NVMe0n1", 00:27:20.855 "core_mask": "0x1", 00:27:20.855 "workload": "verify", 00:27:20.855 "status": "finished", 00:27:20.855 "verify_range": { 00:27:20.855 "start": 0, 00:27:20.855 "length": 16384 00:27:20.855 }, 00:27:20.855 "queue_depth": 1024, 00:27:20.855 "io_size": 4096, 00:27:20.855 "runtime": 10.082944, 00:27:20.855 "iops": 8329.114988638239, 00:27:20.855 "mibps": 32.53560542436812, 00:27:20.855 "io_failed": 0, 00:27:20.855 "io_timeout": 0, 00:27:20.855 "avg_latency_us": 122390.6433643573, 00:27:20.855 "min_latency_us": 27525.12, 00:27:20.855 "max_latency_us": 161099.40363636363 00:27:20.855 } 00:27:20.855 ], 00:27:20.855 
"core_count": 1 00:27:20.855 } 00:27:20.855 11:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 103350 00:27:20.855 11:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 103350 ']' 00:27:20.855 11:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 103350 00:27:20.855 11:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:27:20.855 11:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:20.855 11:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 103350 00:27:20.855 11:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:20.855 11:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:20.855 killing process with pid 103350 00:27:20.855 11:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 103350' 00:27:20.855 Received shutdown signal, test time was about 10.000000 seconds 00:27:20.855 00:27:20.855 Latency(us) 00:27:20.855 [2024-11-20T11:40:06.445Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:20.856 [2024-11-20T11:40:06.445Z] =================================================================================================================== 00:27:20.856 [2024-11-20T11:40:06.445Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:20.856 11:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 103350 00:27:20.856 11:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 103350 00:27:20.856 11:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:27:20.856 11:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:27:20.856 11:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:20.856 11:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:27:20.856 11:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:20.856 11:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:27:20.856 11:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:20.856 11:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:20.856 rmmod nvme_tcp 00:27:20.856 rmmod nvme_fabrics 00:27:20.856 rmmod nvme_keyring 00:27:21.127 11:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:21.127 11:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:27:21.127 11:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:27:21.127 11:40:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 103313 ']' 00:27:21.127 11:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 103313 00:27:21.127 11:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 103313 ']' 00:27:21.127 11:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 103313 00:27:21.127 11:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:27:21.127 11:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:21.127 11:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 103313 00:27:21.127 11:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:27:21.127 11:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:27:21.127 killing process with pid 103313 00:27:21.127 11:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 103313' 00:27:21.127 11:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 103313 00:27:21.127 11:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 103313 00:27:21.127 11:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:21.127 11:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:21.127 11:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:21.127 11:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:27:21.127 11:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:27:21.127 11:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:21.127 11:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:27:21.127 11:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:21.127 11:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:27:21.127 11:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:27:21.127 11:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:27:21.395 11:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:27:21.395 11:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:27:21.395 11:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:27:21.395 11:40:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:27:21.395 11:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:27:21.395 11:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:27:21.395 11:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:27:21.395 11:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:27:21.395 11:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:27:21.395 11:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:27:21.395 11:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:27:21.395 11:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@246 -- # remove_spdk_ns 00:27:21.395 11:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:21.395 11:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:21.395 11:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:21.395 11:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@300 -- # return 0 00:27:21.395 00:27:21.395 real 0m12.496s 00:27:21.395 user 0m20.811s 00:27:21.395 sys 0m2.233s 00:27:21.395 11:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:21.395 ************************************ 00:27:21.395 11:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:27:21.395 END TEST nvmf_queue_depth 00:27:21.395 ************************************ 00:27:21.395 11:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:27:21.395 11:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:27:21.395 11:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:21.395 11:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:27:21.395 ************************************ 00:27:21.395 START TEST nvmf_target_multipath 00:27:21.395 ************************************ 00:27:21.395 11:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:27:21.655 * Looking for test storage... 
00:27:21.655 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:27:21.655 11:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:27:21.655 11:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lcov --version 00:27:21.655 11:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:27:21.655 11:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:27:21.655 11:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:21.655 11:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:21.655 11:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:21.655 11:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:27:21.655 11:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:27:21.655 11:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:27:21.655 11:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:27:21.655 11:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:27:21.655 11:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:27:21.655 11:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:27:21.655 11:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:21.655 11:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:27:21.655 11:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:27:21.655 11:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:21.655 11:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:21.655 11:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:27:21.655 11:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:27:21.655 11:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:21.655 11:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:27:21.655 11:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:27:21.655 11:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:27:21.655 11:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:27:21.655 11:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:21.655 11:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:27:21.655 11:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:27:21.655 11:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:21.655 11:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:21.655 11:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:27:21.655 11:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:21.655 11:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:27:21.655 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:21.655 --rc genhtml_branch_coverage=1 00:27:21.655 --rc genhtml_function_coverage=1 00:27:21.655 --rc genhtml_legend=1 00:27:21.655 --rc geninfo_all_blocks=1 00:27:21.655 --rc geninfo_unexecuted_blocks=1 00:27:21.655 00:27:21.655 ' 00:27:21.655 11:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:27:21.655 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:21.655 --rc genhtml_branch_coverage=1 00:27:21.655 --rc genhtml_function_coverage=1 00:27:21.655 --rc genhtml_legend=1 00:27:21.655 --rc geninfo_all_blocks=1 00:27:21.655 --rc geninfo_unexecuted_blocks=1 00:27:21.655 00:27:21.655 ' 00:27:21.656 11:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:27:21.656 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:21.656 --rc genhtml_branch_coverage=1 00:27:21.656 --rc genhtml_function_coverage=1 00:27:21.656 --rc genhtml_legend=1 00:27:21.656 --rc geninfo_all_blocks=1 00:27:21.656 --rc geninfo_unexecuted_blocks=1 00:27:21.656 00:27:21.656 ' 00:27:21.656 11:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:27:21.656 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:21.656 --rc genhtml_branch_coverage=1 00:27:21.656 --rc genhtml_function_coverage=1 00:27:21.656 --rc 
genhtml_legend=1 00:27:21.656 --rc geninfo_all_blocks=1 00:27:21.656 --rc geninfo_unexecuted_blocks=1 00:27:21.656 00:27:21.656 ' 00:27:21.656 11:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:27:21.656 11:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:27:21.656 11:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:21.656 11:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:21.656 11:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:21.656 11:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:21.656 11:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:21.656 11:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:21.656 11:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:21.656 11:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:21.656 11:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:21.656 11:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:21.656 11:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a 00:27:21.656 11:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=806d442c-08ba-4719-b76d-33169283459a 00:27:21.656 11:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:21.656 11:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:21.656 11:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:27:21.656 11:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:21.656 11:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:27:21.656 11:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:27:21.656 11:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:21.656 11:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:21.656 11:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:21.656 11:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:21.656 11:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:21.656 11:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:21.656 11:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:27:21.656 11:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:21.656 11:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:27:21.656 11:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:21.656 11:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:21.656 11:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:21.656 11:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:21.656 11:40:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:21.656 11:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:27:21.656 11:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:27:21.656 11:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:21.656 11:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:21.656 11:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:21.656 11:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:21.656 11:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:21.656 11:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:27:21.656 11:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:21.656 11:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:27:21.656 11:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:21.656 11:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:21.656 11:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:21.656 11:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:21.656 11:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:21.656 11:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:21.656 11:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:21.656 11:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:21.656 11:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:27:21.656 11:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:27:21.656 11:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:27:21.656 11:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:27:21.656 11:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:27:21.656 11:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@460 -- # nvmf_veth_init 00:27:21.656 11:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:21.656 11:40:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:27:21.657 11:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:27:21.657 11:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:27:21.657 11:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:21.657 11:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:27:21.657 11:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:27:21.657 11:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:27:21.657 11:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:27:21.657 11:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:27:21.657 11:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:27:21.657 11:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:21.657 11:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:27:21.657 11:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:27:21.657 11:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:27:21.657 11:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:27:21.657 11:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:27:21.657 Cannot find device "nvmf_init_br" 00:27:21.657 11:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@162 -- # true 00:27:21.657 11:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:27:21.657 Cannot find device "nvmf_init_br2" 00:27:21.916 11:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@163 -- # true 00:27:21.916 11:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:27:21.916 Cannot find device "nvmf_tgt_br" 00:27:21.916 11:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@164 -- # true 00:27:21.916 11:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:27:21.916 Cannot find device "nvmf_tgt_br2" 00:27:21.916 11:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@165 -- # true 00:27:21.916 11:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@166 -- # ip link set 
nvmf_init_br down 00:27:21.916 Cannot find device "nvmf_init_br" 00:27:21.916 11:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@166 -- # true 00:27:21.916 11:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:27:21.916 Cannot find device "nvmf_init_br2" 00:27:21.916 11:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@167 -- # true 00:27:21.916 11:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:27:21.916 Cannot find device "nvmf_tgt_br" 00:27:21.916 11:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@168 -- # true 00:27:21.916 11:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:27:21.916 Cannot find device "nvmf_tgt_br2" 00:27:21.916 11:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@169 -- # true 00:27:21.916 11:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:27:21.916 Cannot find device "nvmf_br" 00:27:21.916 11:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@170 -- # true 00:27:21.916 11:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:27:21.916 Cannot find device "nvmf_init_if" 00:27:21.916 11:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@171 -- # true 00:27:21.916 11:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:27:21.916 Cannot find device "nvmf_init_if2" 00:27:21.916 11:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@172 -- # true 00:27:21.916 11:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:27:21.916 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:27:21.916 11:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@173 -- # true 00:27:21.916 11:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:27:21.916 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:27:21.916 11:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@174 -- # true 00:27:21.916 11:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:27:21.916 11:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:27:21.916 11:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:27:21.916 11:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:27:21.916 11:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@183 -- # ip link add 
nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:27:21.916 11:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:27:21.916 11:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:27:21.916 11:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:27:21.916 11:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:27:21.916 11:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:27:21.916 11:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:27:21.916 11:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:27:21.916 11:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:27:21.916 11:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:27:21.916 11:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:27:21.916 11:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:27:21.916 11:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:27:21.916 11:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:27:21.916 11:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:27:22.186 11:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:27:22.186 11:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:27:22.186 11:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:27:22.186 11:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:27:22.186 11:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:27:22.186 11:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:27:22.186 11:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:27:22.186 11:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:27:22.186 11:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@790 -- # 
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:27:22.186 11:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:27:22.186 11:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:27:22.186 11:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:27:22.186 11:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:27:22.186 11:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:27:22.186 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:27:22.187 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.077 ms 00:27:22.187 00:27:22.187 --- 10.0.0.3 ping statistics --- 00:27:22.187 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:22.187 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:27:22.187 11:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:27:22.187 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:27:22.187 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.077 ms 00:27:22.187 00:27:22.187 --- 10.0.0.4 ping statistics --- 00:27:22.187 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:22.187 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:27:22.187 11:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:27:22.187 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:22.187 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:27:22.187 00:27:22.187 --- 10.0.0.1 ping statistics --- 00:27:22.187 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:22.187 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:27:22.187 11:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:27:22.187 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:27:22.187 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.054 ms 00:27:22.187 00:27:22.187 --- 10.0.0.2 ping statistics --- 00:27:22.187 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:22.187 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:27:22.187 11:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:22.187 11:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@461 -- # return 0 00:27:22.187 11:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:22.187 11:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:22.187 11:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:22.187 11:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:22.187 11:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:22.187 11:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:22.187 11:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:22.187 11:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z 10.0.0.4 ']' 00:27:22.187 11:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@51 -- # '[' tcp '!=' tcp ']' 00:27:22.187 11:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@57 -- # nvmfappstart -m 0xF 00:27:22.187 11:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:22.187 11:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:22.187 11:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:27:22.187 11:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@509 -- # nvmfpid=103716 00:27:22.187 11:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:27:22.187 11:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@510 -- # waitforlisten 103716 00:27:22.187 11:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@835 -- # '[' -z 103716 ']' 00:27:22.187 11:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:22.187 11:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:22.187 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:27:22.187 11:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:22.187 11:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:22.187 11:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:27:22.187 [2024-11-20 11:40:07.715418] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:27:22.187 [2024-11-20 11:40:07.716851] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 00:27:22.187 [2024-11-20 11:40:07.716918] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:22.445 [2024-11-20 11:40:07.880002] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:22.445 [2024-11-20 11:40:07.923586] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:22.445 [2024-11-20 11:40:07.923680] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:22.445 [2024-11-20 11:40:07.923700] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:22.445 [2024-11-20 11:40:07.923715] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:22.445 [2024-11-20 11:40:07.923728] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:22.445 [2024-11-20 11:40:07.924656] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:22.445 [2024-11-20 11:40:07.927656] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:27:22.445 [2024-11-20 11:40:07.927805] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:27:22.445 [2024-11-20 11:40:07.927827] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:22.445 [2024-11-20 11:40:07.986449] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:27:22.445 [2024-11-20 11:40:07.986769] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:27:22.445 [2024-11-20 11:40:07.986812] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:27:22.445 [2024-11-20 11:40:07.987064] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:27:22.445 [2024-11-20 11:40:07.987190] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
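At this point the trace has finished nvmf_veth_init and started the in-namespace nvmf_tgt in interrupt mode. The following is a condensed, hedged sketch of the network topology the commands above build; interface names, addresses and the port-4420 rules are taken from the trace, but this is not the full helper (the real script routes iptables through an "ipts" wrapper that tags rules with SPDK_NVMF comments, and it tears down any leftover devices first).

#!/usr/bin/env bash
# Sketch of the veth/namespace topology built by the trace above (simplified).
set -e
NS=nvmf_tgt_ns_spdk

# Target-side interfaces live in their own network namespace.
ip netns add "$NS"

# Two initiator links and two target links; each is a veth pair whose peer
# stays on the host so it can be enslaved to a bridge.
ip link add nvmf_init_if  type veth peer name nvmf_init_br
ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns "$NS"
ip link set nvmf_tgt_if2 netns "$NS"

# 10.0.0.1/2 are the initiator addresses, 10.0.0.3/4 the in-namespace targets.
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip addr add 10.0.0.2/24 dev nvmf_init_if2
ip netns exec "$NS" ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip netns exec "$NS" ip addr add 10.0.0.4/24 dev nvmf_tgt_if2

# Bring everything up and bridge the host-side peers together.
for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" up
done
ip netns exec "$NS" ip link set nvmf_tgt_if up
ip netns exec "$NS" ip link set nvmf_tgt_if2 up
ip netns exec "$NS" ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" master nvmf_br
done

# Allow NVMe/TCP traffic (port 4420) in, let the bridge forward it,
# then verify reachability in both directions as the trace does with ping.
iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.3
ip netns exec "$NS" ping -c 1 10.0.0.1

With this in place the host reaches the target addresses 10.0.0.3/10.0.0.4 through the bridge, which is what the ping statistics above confirm before the target application is launched inside the namespace.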
00:27:22.445 11:40:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:22.445 11:40:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@868 -- # return 0 00:27:22.445 11:40:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:22.445 11:40:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:22.445 11:40:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:27:22.703 11:40:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:22.703 11:40:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:27:22.961 [2024-11-20 11:40:08.368845] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:22.961 11:40:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:27:23.218 Malloc0 00:27:23.218 11:40:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r 00:27:23.476 11:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:24.042 11:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:27:24.301 [2024-11-20 11:40:09.668887] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:27:24.301 11:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 00:27:24.558 [2024-11-20 11:40:09.968809] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.4 port 4420 *** 00:27:24.558 11:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@67 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a --hostid=806d442c-08ba-4719-b76d-33169283459a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G 00:27:24.558 11:40:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@68 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a --hostid=806d442c-08ba-4719-b76d-33169283459a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.4 -s 4420 -g -G 00:27:24.816 11:40:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@69 -- # waitforserial SPDKISFASTANDAWESOME 00:27:24.816 11:40:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1202 -- # local i=0 00:27:24.816 11:40:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:27:24.816 11:40:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:27:24.816 11:40:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1209 -- # sleep 2 00:27:26.718 11:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:27:26.718 11:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:27:26.718 11:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:27:26.718 11:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:27:26.718 11:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:27:26.718 11:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1212 -- # return 0 00:27:26.718 11:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@72 -- # get_subsystem nqn.2016-06.io.spdk:cnode1 SPDKISFASTANDAWESOME 00:27:26.718 11:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@34 -- # local nqn=nqn.2016-06.io.spdk:cnode1 serial=SPDKISFASTANDAWESOME s 00:27:26.718 11:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@36 -- # for s in /sys/class/nvme-subsystem/* 00:27:26.718 11:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:27:26.718 11:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ SPDKISFASTANDAWESOME == \S\P\D\K\I\S\F\A\S\T\A\N\D\A\W\E\S\O\M\E ]] 00:27:26.718 11:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@38 -- # echo nvme-subsys0 00:27:26.718 11:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@38 -- # return 0 00:27:26.718 11:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@72 -- # subsystem=nvme-subsys0 00:27:26.718 11:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@73 -- # paths=(/sys/class/nvme-subsystem/$subsystem/nvme*/nvme*c*) 00:27:26.718 11:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@74 -- # paths=("${paths[@]##*/}") 00:27:26.718 11:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@76 -- # (( 2 == 2 )) 00:27:26.718 11:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@78 -- # p0=nvme0c0n1 00:27:26.718 11:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@79 -- # p1=nvme0c1n1 00:27:26.718 11:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@81 -- # check_ana_state nvme0c0n1 optimized 00:27:26.718 11:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:27:26.718 11:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:27:26.718 11:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:27:26.718 11:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:27:26.718 11:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:27:26.718 11:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@82 -- # check_ana_state nvme0c1n1 optimized 00:27:26.718 11:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:27:26.718 11:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:27:26.718 11:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:27:26.718 11:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:27:26.718 11:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:27:26.718 11:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@85 -- # echo numa 00:27:26.718 11:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@88 -- # fio_pid=103842 00:27:26.718 11:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@90 -- # sleep 1 00:27:26.718 11:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:27:26.718 [global] 00:27:26.718 thread=1 00:27:26.718 invalidate=1 00:27:26.718 rw=randrw 00:27:26.718 time_based=1 00:27:26.718 runtime=6 00:27:26.718 ioengine=libaio 00:27:26.718 direct=1 00:27:26.718 bs=4096 00:27:26.718 iodepth=128 00:27:26.718 norandommap=0 00:27:26.718 numjobs=1 00:27:26.718 00:27:26.718 verify_dump=1 00:27:26.718 verify_backlog=512 00:27:26.718 verify_state_save=0 00:27:26.718 do_verify=1 00:27:26.718 verify=crc32c-intel 00:27:26.718 [job0] 00:27:26.719 filename=/dev/nvme0n1 00:27:27.025 Could not set queue depth (nvme0n1) 00:27:27.025 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:27:27.025 fio-3.35 00:27:27.025 Starting 1 thread 00:27:27.995 11:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:27:28.253 11:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n non_optimized 00:27:28.511 11:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
target/multipath.sh@95 -- # check_ana_state nvme0c0n1 inaccessible 00:27:28.511 11:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:27:28.511 11:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:27:28.511 11:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:27:28.511 11:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:27:28.511 11:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:27:28.511 11:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@96 -- # check_ana_state nvme0c1n1 non-optimized 00:27:28.511 11:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:27:28.511 11:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:27:28.511 11:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:27:28.511 11:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:27:28.511 11:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:27:28.511 11:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:27:29.886 11:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:27:29.886 11:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:27:29.886 11:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:27:29.886 11:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:27:29.886 11:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n inaccessible 00:27:30.145 11:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@101 -- # check_ana_state nvme0c0n1 non-optimized 00:27:30.145 11:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:27:30.145 11:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:27:30.145 11:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:27:30.145 11:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:27:30.145 11:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:27:30.145 11:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@102 -- # check_ana_state nvme0c1n1 inaccessible 00:27:30.145 11:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:27:30.145 11:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:27:30.145 11:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:27:30.145 11:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:27:30.145 11:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:27:30.145 11:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:27:31.521 11:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:27:31.521 11:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:27:31.521 11:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:27:31.521 11:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@104 -- # wait 103842 00:27:33.424 00:27:33.424 job0: (groupid=0, jobs=1): err= 0: pid=103863: Wed Nov 20 11:40:18 2024 00:27:33.424 read: IOPS=10.5k, BW=40.9MiB/s (42.9MB/s)(246MiB/6018msec) 00:27:33.424 slat (usec): min=2, max=7256, avg=53.62, stdev=262.17 00:27:33.424 clat (usec): min=852, max=49116, avg=8158.75, stdev=1713.17 00:27:33.424 lat (usec): min=956, max=49136, avg=8212.37, stdev=1725.20 00:27:33.424 clat percentiles (usec): 00:27:33.424 | 1.00th=[ 4621], 5.00th=[ 5932], 10.00th=[ 6587], 20.00th=[ 7177], 00:27:33.424 | 30.00th=[ 7439], 40.00th=[ 7701], 50.00th=[ 7963], 60.00th=[ 8291], 00:27:33.424 | 70.00th=[ 8586], 80.00th=[ 9110], 90.00th=[ 9896], 95.00th=[10945], 00:27:33.424 | 99.00th=[13173], 99.50th=[14353], 99.90th=[20317], 99.95th=[24249], 00:27:33.424 | 99.99th=[49021] 00:27:33.424 bw ( KiB/s): min=10928, max=27264, per=52.94%, avg=22181.00, stdev=5118.57, samples=12 00:27:33.424 iops : min= 2732, max= 6816, avg=5545.25, stdev=1279.64, samples=12 00:27:33.424 write: IOPS=6138, BW=24.0MiB/s (25.1MB/s)(130MiB/5436msec); 0 zone resets 00:27:33.424 slat (usec): min=3, max=9138, avg=66.72, stdev=162.90 00:27:33.424 clat (usec): min=592, max=24297, avg=7442.44, stdev=1316.91 00:27:33.424 lat (usec): min=669, max=24408, avg=7509.16, stdev=1321.82 00:27:33.424 clat percentiles (usec): 00:27:33.424 | 1.00th=[ 4015], 5.00th=[ 5407], 10.00th=[ 6325], 20.00th=[ 6783], 00:27:33.424 | 30.00th=[ 7046], 40.00th=[ 7242], 50.00th=[ 7439], 60.00th=[ 7635], 00:27:33.424 | 70.00th=[ 7832], 80.00th=[ 8029], 90.00th=[ 8455], 95.00th=[ 9241], 00:27:33.424 | 99.00th=[11469], 99.50th=[12649], 99.90th=[19268], 99.95th=[22152], 00:27:33.424 | 99.99th=[24249] 00:27:33.424 bw ( KiB/s): min=11248, max=26872, per=90.45%, avg=22211.50, stdev=4944.66, samples=12 00:27:33.424 iops : min= 2812, max= 6718, avg=5552.83, stdev=1236.12, samples=12 00:27:33.424 lat (usec) : 750=0.01%, 1000=0.01% 00:27:33.424 lat (msec) : 2=0.08%, 4=0.56%, 10=92.14%, 20=7.11%, 50=0.11% 00:27:33.424 cpu : usr=5.65%, sys=24.31%, ctx=7762, majf=0, minf=78 00:27:33.424 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:27:33.424 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:33.424 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:27:33.424 issued rwts: total=63037,33370,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:33.424 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:33.424 00:27:33.424 Run status group 0 (all jobs): 00:27:33.424 READ: bw=40.9MiB/s (42.9MB/s), 40.9MiB/s-40.9MiB/s (42.9MB/s-42.9MB/s), io=246MiB (258MB), run=6018-6018msec 00:27:33.424 WRITE: bw=24.0MiB/s (25.1MB/s), 24.0MiB/s-24.0MiB/s (25.1MB/s-25.1MB/s), io=130MiB (137MB), run=5436-5436msec 00:27:33.424 00:27:33.424 Disk stats (read/write): 00:27:33.424 nvme0n1: ios=62116/32711, merge=0/0, ticks=472938/231079, in_queue=704017, util=98.60% 00:27:33.424 11:40:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:27:33.424 11:40:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n optimized 00:27:33.701 11:40:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@109 -- # check_ana_state nvme0c0n1 optimized 00:27:33.701 11:40:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:27:33.701 11:40:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:27:33.702 11:40:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:27:33.702 11:40:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:27:33.702 11:40:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:27:33.702 11:40:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@110 -- # check_ana_state nvme0c1n1 optimized 00:27:33.702 11:40:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:27:33.702 11:40:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:27:33.702 11:40:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:27:33.702 11:40:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:27:33.702 11:40:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \o\p\t\i\m\i\z\e\d ]] 00:27:33.702 11:40:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:27:34.677 11:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:27:34.677 11:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:27:34.677 11:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:27:34.677 11:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@113 -- # echo round-robin 00:27:34.677 11:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@116 -- # fio_pid=103991 00:27:34.677 11:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@118 -- # sleep 1 00:27:34.677 11:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:27:34.677 [global] 00:27:34.677 thread=1 00:27:34.677 invalidate=1 00:27:34.677 rw=randrw 00:27:34.677 time_based=1 00:27:34.677 runtime=6 00:27:34.677 ioengine=libaio 00:27:34.677 direct=1 00:27:34.677 bs=4096 00:27:34.677 iodepth=128 00:27:34.677 norandommap=0 00:27:34.677 numjobs=1 00:27:34.677 00:27:34.677 verify_dump=1 00:27:34.677 verify_backlog=512 00:27:34.677 verify_state_save=0 00:27:34.677 do_verify=1 00:27:34.677 verify=crc32c-intel 00:27:34.677 [job0] 00:27:34.677 filename=/dev/nvme0n1 00:27:34.677 Could not set queue depth (nvme0n1) 00:27:34.936 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:27:34.936 fio-3.35 00:27:34.936 Starting 1 thread 00:27:35.872 11:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:27:36.130 11:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n non_optimized 00:27:36.389 11:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@123 -- # check_ana_state nvme0c0n1 inaccessible 00:27:36.389 11:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:27:36.389 11:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:27:36.389 11:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:27:36.389 11:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c0n1/ana_state ]] 00:27:36.389 11:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:27:36.389 11:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@124 -- # check_ana_state nvme0c1n1 non-optimized 00:27:36.389 11:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:27:36.389 11:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:27:36.389 11:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:27:36.389 11:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:27:36.389 11:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:27:36.389 11:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:27:37.323 11:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:27:37.323 11:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:27:37.323 11:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:27:37.323 11:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:27:37.582 11:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n inaccessible 00:27:38.149 11:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@129 -- # check_ana_state nvme0c0n1 non-optimized 00:27:38.149 11:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:27:38.149 11:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:27:38.149 11:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:27:38.149 11:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c0n1/ana_state ]] 00:27:38.149 11:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:27:38.150 11:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@130 -- # check_ana_state nvme0c1n1 inaccessible 00:27:38.150 11:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:27:38.150 11:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:27:38.150 11:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:27:38.150 11:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:27:38.150 11:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:27:38.150 11:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:27:39.085 11:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:27:39.085 11:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:27:39.085 11:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:27:39.085 11:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@132 -- # wait 103991 00:27:40.999 00:27:40.999 job0: (groupid=0, jobs=1): err= 0: pid=104012: Wed Nov 20 11:40:26 2024 00:27:40.999 read: IOPS=12.0k, BW=46.7MiB/s (49.0MB/s)(281MiB/6007msec) 00:27:40.999 slat (usec): min=2, max=9706, avg=41.53, stdev=222.24 00:27:40.999 clat (usec): min=225, max=35612, avg=7153.68, stdev=2090.94 00:27:40.999 lat (usec): min=245, max=35630, avg=7195.21, stdev=2108.22 00:27:40.999 clat percentiles (usec): 00:27:40.999 | 1.00th=[ 1909], 5.00th=[ 3818], 10.00th=[ 4621], 20.00th=[ 5669], 00:27:40.999 | 30.00th=[ 6587], 40.00th=[ 7046], 50.00th=[ 7373], 60.00th=[ 7635], 00:27:40.999 | 70.00th=[ 7898], 80.00th=[ 8291], 90.00th=[ 8979], 95.00th=[ 9896], 00:27:40.999 | 99.00th=[11863], 99.50th=[13042], 99.90th=[25560], 99.95th=[28705], 00:27:40.999 | 99.99th=[33424] 00:27:40.999 bw ( KiB/s): min= 6424, max=35920, per=54.79%, avg=26214.55, stdev=8474.92, samples=11 00:27:40.999 iops : min= 1606, max= 8980, avg=6553.64, stdev=2118.73, samples=11 00:27:40.999 write: IOPS=7301, BW=28.5MiB/s (29.9MB/s)(151MiB/5293msec); 0 zone resets 00:27:40.999 slat (usec): min=12, max=2301, avg=54.60, stdev=125.26 00:27:40.999 clat (usec): min=144, max=33415, avg=6429.56, stdev=2040.92 00:27:40.999 lat (usec): min=179, max=33441, avg=6484.15, stdev=2052.41 00:27:40.999 clat percentiles (usec): 00:27:40.999 | 1.00th=[ 1319], 5.00th=[ 3195], 10.00th=[ 3818], 20.00th=[ 4752], 00:27:40.999 | 30.00th=[ 5800], 40.00th=[ 6587], 50.00th=[ 6849], 60.00th=[ 7111], 00:27:40.999 | 70.00th=[ 7308], 80.00th=[ 7570], 90.00th=[ 7898], 95.00th=[ 8455], 00:27:40.999 | 99.00th=[11207], 99.50th=[13042], 99.90th=[23987], 99.95th=[24511], 00:27:40.999 | 99.99th=[32900] 00:27:40.999 bw ( KiB/s): min= 6816, 
max=36717, per=89.68%, avg=26191.00, stdev=8288.36, samples=11 00:27:40.999 iops : min= 1704, max= 9179, avg=6547.73, stdev=2072.06, samples=11 00:27:40.999 lat (usec) : 250=0.01%, 500=0.15%, 750=0.18%, 1000=0.21% 00:27:40.999 lat (msec) : 2=0.65%, 4=6.86%, 10=88.08%, 20=3.55%, 50=0.30% 00:27:40.999 cpu : usr=6.08%, sys=25.31%, ctx=9421, majf=0, minf=127 00:27:40.999 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:27:40.999 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:40.999 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:27:40.999 issued rwts: total=71846,38646,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:40.999 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:40.999 00:27:40.999 Run status group 0 (all jobs): 00:27:40.999 READ: bw=46.7MiB/s (49.0MB/s), 46.7MiB/s-46.7MiB/s (49.0MB/s-49.0MB/s), io=281MiB (294MB), run=6007-6007msec 00:27:40.999 WRITE: bw=28.5MiB/s (29.9MB/s), 28.5MiB/s-28.5MiB/s (29.9MB/s-29.9MB/s), io=151MiB (158MB), run=5293-5293msec 00:27:40.999 00:27:40.999 Disk stats (read/write): 00:27:40.999 nvme0n1: ios=70289/38565, merge=0/0, ticks=470650/234078, in_queue=704728, util=98.61% 00:27:40.999 11:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@134 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:27:40.999 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:27:40.999 11:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@135 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:27:40.999 11:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1223 -- # local i=0 00:27:40.999 11:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:27:40.999 11:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:27:41.258 11:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:27:41.258 11:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:27:41.258 11:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1235 -- # return 0 00:27:41.258 11:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:41.516 11:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@139 -- # rm -f ./local-job0-0-verify.state 00:27:41.516 11:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@140 -- # rm -f ./local-job1-1-verify.state 00:27:41.516 11:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@142 -- # trap - SIGINT SIGTERM EXIT 00:27:41.516 11:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@144 -- # nvmftestfini 00:27:41.516 11:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:41.516 11:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:27:41.516 11:40:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:41.516 11:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:27:41.516 11:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:41.516 11:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:41.516 rmmod nvme_tcp 00:27:41.516 rmmod nvme_fabrics 00:27:41.516 rmmod nvme_keyring 00:27:41.516 11:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:41.516 11:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:27:41.516 11:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:27:41.516 11:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n 103716 ']' 00:27:41.516 11:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@518 -- # killprocess 103716 00:27:41.516 11:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@954 -- # '[' -z 103716 ']' 00:27:41.516 11:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@958 -- # kill -0 103716 00:27:41.516 11:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@959 -- # uname 00:27:41.516 11:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:41.516 11:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 103716 00:27:41.516 11:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:41.516 11:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:41.516 killing process with pid 103716 00:27:41.516 11:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@972 -- # echo 'killing process with pid 103716' 00:27:41.516 11:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@973 -- # kill 103716 00:27:41.516 11:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@978 -- # wait 103716 00:27:41.775 11:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:41.775 11:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:41.775 11:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:41.775 11:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:27:41.775 11:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:41.775 11:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:27:41.775 11:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath 
-- nvmf/common.sh@791 -- # iptables-save 00:27:41.775 11:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:41.775 11:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:27:41.775 11:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:27:41.775 11:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:27:41.775 11:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:27:41.775 11:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:27:41.775 11:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:27:41.775 11:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:27:41.775 11:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:27:41.775 11:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:27:41.775 11:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:27:41.775 11:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:27:41.775 11:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:27:42.034 11:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:27:42.034 11:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:27:42.034 11:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@246 -- # remove_spdk_ns 00:27:42.034 11:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:42.034 11:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:42.034 11:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:42.034 11:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@300 -- # return 0 00:27:42.034 00:27:42.034 real 0m20.469s 00:27:42.034 user 1m10.786s 00:27:42.034 sys 0m9.841s 00:27:42.034 11:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:42.034 11:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:27:42.034 ************************************ 00:27:42.034 END TEST nvmf_target_multipath 00:27:42.034 ************************************ 00:27:42.034 11:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:27:42.034 11:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:27:42.034 11:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:42.034 11:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:27:42.034 ************************************ 00:27:42.034 START TEST nvmf_zcopy 00:27:42.034 ************************************ 00:27:42.034 11:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:27:42.034 * Looking for test storage... 00:27:42.034 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:27:42.034 11:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:27:42.034 11:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lcov --version 00:27:42.034 11:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:27:42.293 11:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:27:42.293 11:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:42.293 11:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:42.293 11:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:42.293 11:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:27:42.293 11:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:27:42.293 11:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:27:42.293 11:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:27:42.293 11:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:27:42.293 11:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:27:42.293 11:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:27:42.293 11:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:42.293 11:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:27:42.294 11:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:27:42.294 11:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:42.294 11:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:42.294 11:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:27:42.294 11:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:27:42.294 11:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:42.294 11:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:27:42.294 11:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:27:42.294 11:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:27:42.294 11:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:27:42.294 11:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:42.294 11:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:27:42.294 11:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:27:42.294 11:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:42.294 11:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:42.294 11:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:27:42.294 11:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:42.294 11:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:27:42.294 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:42.294 --rc genhtml_branch_coverage=1 00:27:42.294 --rc genhtml_function_coverage=1 00:27:42.294 --rc genhtml_legend=1 00:27:42.294 --rc geninfo_all_blocks=1 00:27:42.294 --rc geninfo_unexecuted_blocks=1 00:27:42.294 00:27:42.294 ' 00:27:42.294 11:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:27:42.294 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:42.294 --rc genhtml_branch_coverage=1 00:27:42.294 --rc genhtml_function_coverage=1 00:27:42.294 --rc genhtml_legend=1 00:27:42.294 --rc geninfo_all_blocks=1 00:27:42.294 --rc geninfo_unexecuted_blocks=1 00:27:42.294 00:27:42.294 ' 00:27:42.294 11:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:27:42.294 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:42.294 --rc genhtml_branch_coverage=1 00:27:42.294 --rc genhtml_function_coverage=1 00:27:42.294 --rc genhtml_legend=1 00:27:42.294 --rc geninfo_all_blocks=1 00:27:42.294 --rc geninfo_unexecuted_blocks=1 00:27:42.294 00:27:42.294 ' 00:27:42.294 11:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:27:42.294 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:42.294 --rc genhtml_branch_coverage=1 00:27:42.294 --rc genhtml_function_coverage=1 00:27:42.294 --rc genhtml_legend=1 00:27:42.294 --rc geninfo_all_blocks=1 00:27:42.294 --rc geninfo_unexecuted_blocks=1 00:27:42.294 00:27:42.294 ' 00:27:42.294 11:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
target/zcopy.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:27:42.294 11:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:27:42.294 11:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:42.294 11:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:42.294 11:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:42.294 11:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:42.294 11:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:42.294 11:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:42.294 11:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:42.294 11:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:42.294 11:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:42.294 11:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:42.294 11:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a 00:27:42.294 11:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=806d442c-08ba-4719-b76d-33169283459a 00:27:42.294 11:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:42.294 11:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:42.294 11:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:27:42.294 11:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:42.294 11:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:27:42.294 11:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:27:42.294 11:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:42.294 11:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:42.294 11:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:42.294 11:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:42.294 11:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:42.294 11:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:42.294 11:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:27:42.294 11:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:42.294 11:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:27:42.294 11:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:42.294 11:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:42.294 11:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:42.294 11:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:42.294 11:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:42.294 11:40:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:27:42.294 11:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:27:42.294 11:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:42.294 11:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:42.294 11:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:42.295 11:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:27:42.295 11:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:42.295 11:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:42.295 11:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:42.295 11:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:42.295 11:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:42.295 11:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:42.295 11:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:42.295 11:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:42.295 11:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:27:42.295 11:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:27:42.295 11:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:27:42.295 11:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:27:42.295 11:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:27:42.295 11:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@460 -- # nvmf_veth_init 00:27:42.295 11:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:42.295 11:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:27:42.295 11:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:27:42.295 11:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:27:42.295 11:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:42.295 11:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:27:42.295 11:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:27:42.295 11:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:27:42.295 11:40:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:27:42.295 11:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:27:42.295 11:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:27:42.295 11:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:42.295 11:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:27:42.295 11:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:27:42.295 11:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:27:42.295 11:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:27:42.295 11:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:27:42.295 Cannot find device "nvmf_init_br" 00:27:42.295 11:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@162 -- # true 00:27:42.295 11:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:27:42.295 Cannot find device "nvmf_init_br2" 00:27:42.295 11:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@163 -- # true 00:27:42.295 11:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:27:42.295 Cannot find device "nvmf_tgt_br" 00:27:42.295 11:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@164 -- # true 00:27:42.295 11:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:27:42.295 Cannot find device "nvmf_tgt_br2" 00:27:42.295 11:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@165 -- # true 00:27:42.295 11:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:27:42.295 Cannot find device "nvmf_init_br" 00:27:42.295 11:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@166 -- # true 00:27:42.295 11:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:27:42.295 Cannot find device "nvmf_init_br2" 00:27:42.295 11:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@167 -- # true 00:27:42.295 11:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:27:42.295 Cannot find device "nvmf_tgt_br" 00:27:42.295 11:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@168 -- # true 00:27:42.295 11:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:27:42.295 Cannot find device "nvmf_tgt_br2" 00:27:42.295 11:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@169 -- # true 00:27:42.295 11:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:27:42.295 Cannot find device 
"nvmf_br" 00:27:42.295 11:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@170 -- # true 00:27:42.295 11:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:27:42.295 Cannot find device "nvmf_init_if" 00:27:42.295 11:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@171 -- # true 00:27:42.295 11:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:27:42.295 Cannot find device "nvmf_init_if2" 00:27:42.295 11:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@172 -- # true 00:27:42.295 11:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:27:42.295 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:27:42.295 11:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@173 -- # true 00:27:42.295 11:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:27:42.295 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:27:42.295 11:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@174 -- # true 00:27:42.295 11:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:27:42.295 11:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:27:42.295 11:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:27:42.553 11:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:27:42.553 11:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:27:42.553 11:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:27:42.553 11:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:27:42.553 11:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:27:42.554 11:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:27:42.554 11:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:27:42.554 11:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:27:42.554 11:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:27:42.554 11:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:27:42.554 11:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:27:42.554 11:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:27:42.554 11:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:27:42.554 11:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:27:42.554 11:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:27:42.554 11:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:27:42.554 11:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:27:42.554 11:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:27:42.554 11:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:27:42.554 11:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:27:42.554 11:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:27:42.554 11:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:27:42.554 11:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:27:42.554 11:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:27:42.554 11:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:27:42.554 11:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:27:42.554 11:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:27:42.554 11:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:27:42.554 11:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:27:42.554 11:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:27:42.554 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:27:42.554 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.064 ms 00:27:42.554 00:27:42.554 --- 10.0.0.3 ping statistics --- 00:27:42.554 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:42.554 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:27:42.554 11:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:27:42.554 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:27:42.554 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.049 ms 00:27:42.554 00:27:42.554 --- 10.0.0.4 ping statistics --- 00:27:42.554 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:42.554 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:27:42.554 11:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:27:42.554 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:42.554 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:27:42.554 00:27:42.554 --- 10.0.0.1 ping statistics --- 00:27:42.554 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:42.554 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:27:42.554 11:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:27:42.554 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:42.554 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.091 ms 00:27:42.554 00:27:42.554 --- 10.0.0.2 ping statistics --- 00:27:42.554 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:42.554 rtt min/avg/max/mdev = 0.091/0.091/0.091/0.000 ms 00:27:42.554 11:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:42.554 11:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@461 -- # return 0 00:27:42.554 11:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:42.812 11:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:42.813 11:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:42.813 11:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:42.813 11:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:42.813 11:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:42.813 11:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:42.813 11:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:27:42.813 11:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:42.813 11:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:42.813 11:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:27:42.813 11:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=104342 00:27:42.813 11:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:27:42.813 11:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 104342 00:27:42.813 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
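The nvmfappstart/waitforlisten step above reduces to launching nvmf_tgt inside the test namespace and polling its JSON-RPC socket until it answers. A minimal sketch of that pattern, assuming the default /var/tmp/spdk.sock socket and an illustrative 5-second poll budget (the real helper is waitforlisten in common/autotest_common.sh):

  # start the target in the namespace: shm id 0, tracepoint mask 0xFFFF, interrupt mode, core mask 0x2
  ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 &
  nvmfpid=$!
  # the RPC socket is a UNIX-domain socket, so rpc.py can reach it from outside the namespace;
  # rpc_get_methods is a built-in RPC that succeeds as soon as the app is listening
  for _ in $(seq 1 50); do
      ./scripts/rpc.py -s /var/tmp/spdk.sock -t 1 rpc_get_methods >/dev/null 2>&1 && break
      sleep 0.1
  done
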
00:27:42.813 11:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 104342 ']' 00:27:42.813 11:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:42.813 11:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:42.813 11:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:42.813 11:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:42.813 11:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:27:42.813 [2024-11-20 11:40:28.221880] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:27:42.813 [2024-11-20 11:40:28.223123] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 00:27:42.813 [2024-11-20 11:40:28.223194] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:42.813 [2024-11-20 11:40:28.375454] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:43.071 [2024-11-20 11:40:28.413688] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:43.071 [2024-11-20 11:40:28.413991] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:43.071 [2024-11-20 11:40:28.414016] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:43.071 [2024-11-20 11:40:28.414029] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:43.071 [2024-11-20 11:40:28.414038] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:43.071 [2024-11-20 11:40:28.414376] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:43.072 [2024-11-20 11:40:28.469740] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:27:43.072 [2024-11-20 11:40:28.470139] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
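The notices above come from the --interrupt-mode flag: with core mask 0x2 there is a single reactor, and both the app thread and the nvmf poll-group thread are switched from busy polling to interrupt-driven operation. To confirm the mode on a running target, the reactor and transport state can be queried over RPC (a sketch; the exact output fields vary by SPDK release):

  ./scripts/rpc.py framework_get_reactors   # per-lcore reactor state, including interrupt mode on recent releases
  ./scripts/rpc.py nvmf_get_stats           # poll groups and their transport statistics
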
00:27:43.072 11:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:43.072 11:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:27:43.072 11:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:43.072 11:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:43.072 11:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:27:43.072 11:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:43.072 11:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:27:43.072 11:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:27:43.072 11:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:43.072 11:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:27:43.072 [2024-11-20 11:40:28.551168] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:43.072 11:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:43.072 11:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:27:43.072 11:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:43.072 11:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:27:43.072 11:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:43.072 11:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:27:43.072 11:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:43.072 11:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:27:43.072 [2024-11-20 11:40:28.567243] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:27:43.072 11:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:43.072 11:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:27:43.072 11:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:43.072 11:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:27:43.072 11:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:43.072 11:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:27:43.072 11:40:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:43.072 11:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:27:43.072 malloc0 00:27:43.072 11:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:43.072 11:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:27:43.072 11:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:43.072 11:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:27:43.072 11:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:43.072 11:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:27:43.072 11:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:27:43.072 11:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:27:43.072 11:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:27:43.072 11:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:43.072 11:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:43.072 { 00:27:43.072 "params": { 00:27:43.072 "name": "Nvme$subsystem", 00:27:43.072 "trtype": "$TEST_TRANSPORT", 00:27:43.072 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:43.072 "adrfam": "ipv4", 00:27:43.072 "trsvcid": "$NVMF_PORT", 00:27:43.072 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:43.072 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:43.072 "hdgst": ${hdgst:-false}, 00:27:43.072 "ddgst": ${ddgst:-false} 00:27:43.072 }, 00:27:43.072 "method": "bdev_nvme_attach_controller" 00:27:43.072 } 00:27:43.072 EOF 00:27:43.072 )") 00:27:43.072 11:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:27:43.072 11:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:27:43.072 11:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:27:43.072 11:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:27:43.072 "params": { 00:27:43.072 "name": "Nvme1", 00:27:43.072 "trtype": "tcp", 00:27:43.072 "traddr": "10.0.0.3", 00:27:43.072 "adrfam": "ipv4", 00:27:43.072 "trsvcid": "4420", 00:27:43.072 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:43.072 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:43.072 "hdgst": false, 00:27:43.072 "ddgst": false 00:27:43.072 }, 00:27:43.072 "method": "bdev_nvme_attach_controller" 00:27:43.072 }' 00:27:43.332 [2024-11-20 11:40:28.664722] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 
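Stripped of the rpc_cmd wrapper, the target-side setup captured above is the following RPC sequence, with bdevperf then driving verify I/O from the initiator side. Every flag below is copied from the log; only the explicit bdevperf.json file name is illustrative, since the harness feeds the generated config through /dev/fd/62 instead:

  ./scripts/rpc.py nvmf_create_transport -t tcp -o -c 0 --zcopy        # TCP transport, no in-capsule data, zero-copy enabled
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
  ./scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420
  ./scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0               # 32 MiB malloc bdev with 4 KiB blocks
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  # initiator side: 10 s of queue-depth-128 verify I/O at 8 KiB against the exported namespace
  ./build/examples/bdevperf --json bdevperf.json -t 10 -q 128 -w verify -o 8192
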
00:27:43.332 [2024-11-20 11:40:28.664822] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid104374 ] 00:27:43.332 [2024-11-20 11:40:28.818110] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:43.332 [2024-11-20 11:40:28.857591] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:43.590 Running I/O for 10 seconds... 00:27:45.460 5613.00 IOPS, 43.85 MiB/s [2024-11-20T11:40:32.007Z] 5675.50 IOPS, 44.34 MiB/s [2024-11-20T11:40:33.380Z] 5686.00 IOPS, 44.42 MiB/s [2024-11-20T11:40:34.313Z] 5690.25 IOPS, 44.46 MiB/s [2024-11-20T11:40:35.247Z] 5705.20 IOPS, 44.57 MiB/s [2024-11-20T11:40:36.182Z] 5709.00 IOPS, 44.60 MiB/s [2024-11-20T11:40:37.117Z] 5706.29 IOPS, 44.58 MiB/s [2024-11-20T11:40:38.051Z] 5715.75 IOPS, 44.65 MiB/s [2024-11-20T11:40:39.437Z] 5712.00 IOPS, 44.62 MiB/s [2024-11-20T11:40:39.437Z] 5714.10 IOPS, 44.64 MiB/s 00:27:53.848 Latency(us) 00:27:53.848 [2024-11-20T11:40:39.437Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:53.848 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:27:53.848 Verification LBA range: start 0x0 length 0x1000 00:27:53.848 Nvme1n1 : 10.01 5716.85 44.66 0.00 0.00 22317.13 1206.46 32648.84 00:27:53.848 [2024-11-20T11:40:39.437Z] =================================================================================================================== 00:27:53.848 [2024-11-20T11:40:39.437Z] Total : 5716.85 44.66 0.00 0.00 22317.13 1206.46 32648.84 00:27:53.848 11:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=104481 00:27:53.848 11:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:27:53.848 11:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:27:53.848 11:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:27:53.848 11:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:27:53.848 11:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:27:53.848 11:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:27:53.848 11:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:53.848 11:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:53.848 { 00:27:53.848 "params": { 00:27:53.848 "name": "Nvme$subsystem", 00:27:53.848 "trtype": "$TEST_TRANSPORT", 00:27:53.848 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:53.848 "adrfam": "ipv4", 00:27:53.848 "trsvcid": "$NVMF_PORT", 00:27:53.848 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:53.848 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:53.848 "hdgst": ${hdgst:-false}, 00:27:53.848 "ddgst": ${ddgst:-false} 00:27:53.848 }, 00:27:53.848 "method": "bdev_nvme_attach_controller" 00:27:53.848 } 00:27:53.848 EOF 00:27:53.848 )") 00:27:53.848 11:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:27:53.848 [2024-11-20 
11:40:39.154935] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:53.848 [2024-11-20 11:40:39.154980] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:53.848 11:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:27:53.848 11:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:27:53.848 11:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:27:53.848 "params": { 00:27:53.848 "name": "Nvme1", 00:27:53.848 "trtype": "tcp", 00:27:53.848 "traddr": "10.0.0.3", 00:27:53.848 "adrfam": "ipv4", 00:27:53.848 "trsvcid": "4420", 00:27:53.848 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:53.848 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:53.848 "hdgst": false, 00:27:53.848 "ddgst": false 00:27:53.848 }, 00:27:53.848 "method": "bdev_nvme_attach_controller" 00:27:53.848 }' 00:27:53.848 2024/11/20 11:40:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:53.848 [2024-11-20 11:40:39.166899] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:53.848 [2024-11-20 11:40:39.166930] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:53.848 2024/11/20 11:40:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:53.848 [2024-11-20 11:40:39.178894] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:53.848 [2024-11-20 11:40:39.178924] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:53.848 2024/11/20 11:40:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:53.848 [2024-11-20 11:40:39.186889] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:53.848 [2024-11-20 11:40:39.186919] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:53.848 2024/11/20 11:40:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:53.848 [2024-11-20 11:40:39.194888] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:53.848 [2024-11-20 11:40:39.194916] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:53.848 2024/11/20 11:40:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:53.848 [2024-11-20 11:40:39.202888] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:27:53.848 [2024-11-20 11:40:39.202916] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:53.848 2024/11/20 11:40:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:53.848 [2024-11-20 11:40:39.209073] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 00:27:53.848 [2024-11-20 11:40:39.209174] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid104481 ] 00:27:53.848 [2024-11-20 11:40:39.210887] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:53.848 [2024-11-20 11:40:39.210915] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:53.848 2024/11/20 11:40:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:53.848 [2024-11-20 11:40:39.218886] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:53.848 [2024-11-20 11:40:39.218914] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:53.848 2024/11/20 11:40:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:53.848 [2024-11-20 11:40:39.230951] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:53.848 [2024-11-20 11:40:39.231001] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:53.848 2024/11/20 11:40:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:53.848 [2024-11-20 11:40:39.242919] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:53.848 [2024-11-20 11:40:39.242955] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:53.848 2024/11/20 11:40:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:53.848 [2024-11-20 11:40:39.254901] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:53.848 [2024-11-20 11:40:39.254929] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:53.848 2024/11/20 11:40:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: 
Code=-32602 Msg=Invalid parameters 00:27:53.848 [2024-11-20 11:40:39.266897] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:53.848 [2024-11-20 11:40:39.266925] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:53.848 2024/11/20 11:40:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:53.848 [2024-11-20 11:40:39.278896] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:53.848 [2024-11-20 11:40:39.278924] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:53.848 2024/11/20 11:40:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:53.848 [2024-11-20 11:40:39.290894] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:53.848 [2024-11-20 11:40:39.290925] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:53.848 2024/11/20 11:40:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:53.848 [2024-11-20 11:40:39.302893] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:53.848 [2024-11-20 11:40:39.302920] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:53.848 2024/11/20 11:40:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:53.848 [2024-11-20 11:40:39.314893] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:53.848 [2024-11-20 11:40:39.314920] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:53.848 2024/11/20 11:40:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:53.848 [2024-11-20 11:40:39.326894] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:53.848 [2024-11-20 11:40:39.326923] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:53.848 2024/11/20 11:40:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:53.848 [2024-11-20 11:40:39.342901] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:53.848 [2024-11-20 11:40:39.342933] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:53.848 2024/11/20 
11:40:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:53.848 [2024-11-20 11:40:39.354904] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:53.848 [2024-11-20 11:40:39.354936] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:53.848 2024/11/20 11:40:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:53.848 [2024-11-20 11:40:39.359387] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:53.848 [2024-11-20 11:40:39.366926] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:53.848 [2024-11-20 11:40:39.366966] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:53.848 2024/11/20 11:40:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:53.848 [2024-11-20 11:40:39.378919] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:53.848 [2024-11-20 11:40:39.378957] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:53.848 2024/11/20 11:40:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:53.848 [2024-11-20 11:40:39.390919] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:53.848 [2024-11-20 11:40:39.390956] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:53.849 2024/11/20 11:40:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:53.849 [2024-11-20 11:40:39.398886] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:53.849 [2024-11-20 11:40:39.398914] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:53.849 [2024-11-20 11:40:39.399313] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:53.849 2024/11/20 11:40:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:53.849 [2024-11-20 11:40:39.410896] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:53.849 [2024-11-20 11:40:39.410927] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:53.849 2024/11/20 11:40:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:53.849 [2024-11-20 11:40:39.422936] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:53.849 [2024-11-20 11:40:39.422977] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:53.849 2024/11/20 11:40:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:54.109 [2024-11-20 11:40:39.434927] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:54.109 [2024-11-20 11:40:39.434968] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:54.109 2024/11/20 11:40:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:54.109 [2024-11-20 11:40:39.446921] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:54.110 [2024-11-20 11:40:39.446961] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:54.110 2024/11/20 11:40:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:54.110 [2024-11-20 11:40:39.458926] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:54.110 [2024-11-20 11:40:39.458965] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:54.110 2024/11/20 11:40:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:54.110 [2024-11-20 11:40:39.470924] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:54.110 [2024-11-20 11:40:39.470967] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:54.110 2024/11/20 11:40:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:54.110 [2024-11-20 11:40:39.482905] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:54.110 [2024-11-20 11:40:39.482941] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:54.110 2024/11/20 11:40:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:54.110 [2024-11-20 11:40:39.494945] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:27:54.110 [2024-11-20 11:40:39.494981] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:54.110 2024/11/20 11:40:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:54.110 [2024-11-20 11:40:39.506902] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:54.110 [2024-11-20 11:40:39.506936] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:54.110 2024/11/20 11:40:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:54.110 [2024-11-20 11:40:39.514899] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:54.110 [2024-11-20 11:40:39.514933] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:54.110 2024/11/20 11:40:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:54.110 [2024-11-20 11:40:39.526910] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:54.110 [2024-11-20 11:40:39.526948] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:54.110 2024/11/20 11:40:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:54.110 [2024-11-20 11:40:39.534895] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:54.110 [2024-11-20 11:40:39.534928] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:54.110 2024/11/20 11:40:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:54.110 Running I/O for 5 seconds... 
00:27:54.110 [2024-11-20 11:40:39.549187] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:54.110 [2024-11-20 11:40:39.549229] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:54.110 2024/11/20 11:40:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:54.110 [2024-11-20 11:40:39.564832] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:54.110 [2024-11-20 11:40:39.564869] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:54.110 2024/11/20 11:40:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:54.110 [2024-11-20 11:40:39.583033] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:54.110 [2024-11-20 11:40:39.583072] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:54.110 2024/11/20 11:40:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:54.110 [2024-11-20 11:40:39.593655] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:54.110 [2024-11-20 11:40:39.593695] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:54.110 2024/11/20 11:40:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:54.110 [2024-11-20 11:40:39.606865] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:54.110 [2024-11-20 11:40:39.606903] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:54.110 2024/11/20 11:40:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:54.110 [2024-11-20 11:40:39.616577] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:54.110 [2024-11-20 11:40:39.616625] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:54.110 2024/11/20 11:40:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:54.110 [2024-11-20 11:40:39.632926] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:54.110 [2024-11-20 11:40:39.632974] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:54.110 2024/11/20 11:40:39 error on JSON-RPC call, 
method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:54.110 [2024-11-20 11:40:39.649564] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:54.110 [2024-11-20 11:40:39.649611] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:54.110 2024/11/20 11:40:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:54.110 [2024-11-20 11:40:39.665561] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:54.110 [2024-11-20 11:40:39.665609] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:54.110 2024/11/20 11:40:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:54.110 [2024-11-20 11:40:39.680841] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:54.110 [2024-11-20 11:40:39.680890] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:54.110 2024/11/20 11:40:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:54.369 [2024-11-20 11:40:39.699068] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:54.369 [2024-11-20 11:40:39.699107] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:54.369 2024/11/20 11:40:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:54.369 [2024-11-20 11:40:39.710083] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:54.369 [2024-11-20 11:40:39.710128] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:54.369 2024/11/20 11:40:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:54.369 [2024-11-20 11:40:39.722897] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:54.369 [2024-11-20 11:40:39.722935] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:54.369 2024/11/20 11:40:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:54.369 [2024-11-20 11:40:39.733151] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:54.369 [2024-11-20 11:40:39.733187] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:54.369 2024/11/20 11:40:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:54.369 [2024-11-20 11:40:39.748732] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:54.369 [2024-11-20 11:40:39.748769] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:54.369 2024/11/20 11:40:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:54.369 [2024-11-20 11:40:39.766821] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:54.369 [2024-11-20 11:40:39.766857] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:54.369 2024/11/20 11:40:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:54.369 [2024-11-20 11:40:39.777127] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:54.369 [2024-11-20 11:40:39.777172] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:54.369 2024/11/20 11:40:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:54.369 [2024-11-20 11:40:39.791474] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:54.369 [2024-11-20 11:40:39.791511] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:54.369 2024/11/20 11:40:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:54.369 [2024-11-20 11:40:39.811819] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:54.369 [2024-11-20 11:40:39.811855] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:54.370 2024/11/20 11:40:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:54.370 [2024-11-20 11:40:39.828587] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:54.370 [2024-11-20 11:40:39.828639] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:54.370 2024/11/20 11:40:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:54.370 [2024-11-20 11:40:39.846882] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:54.370 [2024-11-20 11:40:39.846918] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:54.370 2024/11/20 11:40:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:54.370 [2024-11-20 11:40:39.856449] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:54.370 [2024-11-20 11:40:39.856486] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:54.370 2024/11/20 11:40:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:54.370 [2024-11-20 11:40:39.871798] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:54.370 [2024-11-20 11:40:39.871851] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:54.370 2024/11/20 11:40:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:54.370 [2024-11-20 11:40:39.891203] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:54.370 [2024-11-20 11:40:39.891241] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:54.370 2024/11/20 11:40:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:54.370 [2024-11-20 11:40:39.901706] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:54.370 [2024-11-20 11:40:39.901741] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:54.370 2024/11/20 11:40:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:54.370 [2024-11-20 11:40:39.915483] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:54.370 [2024-11-20 11:40:39.915521] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:54.370 2024/11/20 11:40:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:54.370 [2024-11-20 11:40:39.925197] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:27:54.370 [2024-11-20 11:40:39.925235] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:54.370 2024/11/20 11:40:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:54.370 [2024-11-20 11:40:39.939417] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:54.370 [2024-11-20 11:40:39.939458] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:54.370 2024/11/20 11:40:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:54.629 [2024-11-20 11:40:39.959638] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:54.629 [2024-11-20 11:40:39.959685] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:54.629 2024/11/20 11:40:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:54.629 [2024-11-20 11:40:39.979750] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:54.629 [2024-11-20 11:40:39.979801] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:54.629 2024/11/20 11:40:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:54.629 [2024-11-20 11:40:39.998364] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:54.629 [2024-11-20 11:40:39.998401] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:54.629 2024/11/20 11:40:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:54.629 [2024-11-20 11:40:40.018976] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:54.629 [2024-11-20 11:40:40.019018] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:54.629 2024/11/20 11:40:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:54.629 [2024-11-20 11:40:40.030415] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:54.629 [2024-11-20 11:40:40.030462] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:54.629 2024/11/20 11:40:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 
no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:54.629 [2024-11-20 11:40:40.044651] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:54.629 [2024-11-20 11:40:40.044711] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:54.629 2024/11/20 11:40:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:54.629 [2024-11-20 11:40:40.060767] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:54.629 [2024-11-20 11:40:40.060815] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:54.629 2024/11/20 11:40:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:54.629 [2024-11-20 11:40:40.079075] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:54.629 [2024-11-20 11:40:40.079112] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:54.629 2024/11/20 11:40:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:54.629 [2024-11-20 11:40:40.089772] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:54.629 [2024-11-20 11:40:40.089807] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:54.629 2024/11/20 11:40:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:54.629 [2024-11-20 11:40:40.105769] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:54.629 [2024-11-20 11:40:40.105807] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:54.629 2024/11/20 11:40:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:54.629 [2024-11-20 11:40:40.121487] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:54.630 [2024-11-20 11:40:40.121526] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:54.630 2024/11/20 11:40:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:54.630 [2024-11-20 11:40:40.137312] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:27:54.630 [2024-11-20 11:40:40.137349] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:54.630 2024/11/20 11:40:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:54.630 [2024-11-20 11:40:40.153413] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:54.630 [2024-11-20 11:40:40.153453] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:54.630 2024/11/20 11:40:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:54.630 [2024-11-20 11:40:40.168442] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:54.630 [2024-11-20 11:40:40.168479] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:54.630 2024/11/20 11:40:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:54.630 [2024-11-20 11:40:40.186605] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:54.630 [2024-11-20 11:40:40.186702] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:54.630 2024/11/20 11:40:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:54.630 [2024-11-20 11:40:40.207843] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:54.630 [2024-11-20 11:40:40.207889] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:54.630 2024/11/20 11:40:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:54.889 [2024-11-20 11:40:40.227675] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:54.889 [2024-11-20 11:40:40.227712] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:54.889 2024/11/20 11:40:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:54.889 [2024-11-20 11:40:40.247378] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:54.889 [2024-11-20 11:40:40.247417] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:54.889 2024/11/20 11:40:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:54.889 [2024-11-20 11:40:40.267919] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:54.889 [2024-11-20 11:40:40.267964] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:54.889 2024/11/20 11:40:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:54.889 [2024-11-20 11:40:40.285649] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:54.889 [2024-11-20 11:40:40.285690] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:54.889 2024/11/20 11:40:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:54.889 [2024-11-20 11:40:40.300862] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:54.889 [2024-11-20 11:40:40.300905] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:54.889 2024/11/20 11:40:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:54.889 [2024-11-20 11:40:40.317238] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:54.889 [2024-11-20 11:40:40.317280] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:54.890 2024/11/20 11:40:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:54.890 [2024-11-20 11:40:40.332282] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:54.890 [2024-11-20 11:40:40.332321] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:54.890 2024/11/20 11:40:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:54.890 [2024-11-20 11:40:40.350839] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:54.890 [2024-11-20 11:40:40.350878] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:54.890 2024/11/20 11:40:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:54.890 [2024-11-20 11:40:40.361057] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:54.890 [2024-11-20 11:40:40.361095] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:54.890 2024/11/20 11:40:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:54.890 [2024-11-20 11:40:40.375848] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:54.890 [2024-11-20 11:40:40.375886] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:54.890 2024/11/20 11:40:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:54.890 [2024-11-20 11:40:40.385513] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:54.890 [2024-11-20 11:40:40.385549] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:54.890 2024/11/20 11:40:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:54.890 [2024-11-20 11:40:40.400939] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:54.890 [2024-11-20 11:40:40.400983] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:54.890 2024/11/20 11:40:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:54.890 [2024-11-20 11:40:40.416913] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:54.890 [2024-11-20 11:40:40.416964] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:54.890 2024/11/20 11:40:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:54.890 [2024-11-20 11:40:40.433026] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:54.890 [2024-11-20 11:40:40.433074] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:54.890 2024/11/20 11:40:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:54.890 [2024-11-20 11:40:40.449440] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:54.890 [2024-11-20 11:40:40.449477] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:54.890 2024/11/20 11:40:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for 
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:54.890 [2024-11-20 11:40:40.464576] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:54.890 [2024-11-20 11:40:40.464627] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:54.890 2024/11/20 11:40:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:55.150 [2024-11-20 11:40:40.483054] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:55.150 [2024-11-20 11:40:40.483094] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:55.150 2024/11/20 11:40:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:55.150 [2024-11-20 11:40:40.493281] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:55.150 [2024-11-20 11:40:40.493332] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:55.150 2024/11/20 11:40:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:55.150 [2024-11-20 11:40:40.507888] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:55.150 [2024-11-20 11:40:40.507940] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:55.150 2024/11/20 11:40:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:55.150 [2024-11-20 11:40:40.517853] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:55.150 [2024-11-20 11:40:40.517901] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:55.150 2024/11/20 11:40:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:55.150 [2024-11-20 11:40:40.530063] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:55.150 [2024-11-20 11:40:40.530101] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:55.150 2024/11/20 11:40:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:55.150 [2024-11-20 11:40:40.540861] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:55.150 [2024-11-20 11:40:40.540896] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:27:55.150 11356.00 IOPS, 88.72 MiB/s [2024-11-20T11:40:40.739Z] 2024/11/20 11:40:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:55.150 [2024-11-20 11:40:40.555572] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:55.150 [2024-11-20 11:40:40.555625] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:55.150 2024/11/20 11:40:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:55.150 [2024-11-20 11:40:40.565428] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:55.150 [2024-11-20 11:40:40.565467] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:55.150 2024/11/20 11:40:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:55.150 [2024-11-20 11:40:40.581858] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:55.150 [2024-11-20 11:40:40.581912] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:55.150 2024/11/20 11:40:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:55.150 [2024-11-20 11:40:40.592184] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:55.150 [2024-11-20 11:40:40.592222] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:55.150 2024/11/20 11:40:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:55.150 [2024-11-20 11:40:40.608651] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:55.150 [2024-11-20 11:40:40.608692] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:55.150 2024/11/20 11:40:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:55.150 [2024-11-20 11:40:40.624937] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:55.150 [2024-11-20 11:40:40.624976] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:55.150 2024/11/20 11:40:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for 
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:55.150 [2024-11-20 11:40:40.643207] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:55.150 [2024-11-20 11:40:40.643269] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:55.150 2024/11/20 11:40:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:55.150 [2024-11-20 11:40:40.654205] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:55.150 [2024-11-20 11:40:40.654242] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:55.150 2024/11/20 11:40:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:55.150 [2024-11-20 11:40:40.666115] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:55.150 [2024-11-20 11:40:40.666151] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:55.151 2024/11/20 11:40:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:55.151 [2024-11-20 11:40:40.680422] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:55.151 [2024-11-20 11:40:40.680459] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:55.151 2024/11/20 11:40:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:55.151 [2024-11-20 11:40:40.699317] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:55.151 [2024-11-20 11:40:40.699367] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:55.151 2024/11/20 11:40:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:55.151 [2024-11-20 11:40:40.715424] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:55.151 [2024-11-20 11:40:40.715464] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:55.151 2024/11/20 11:40:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:55.151 [2024-11-20 11:40:40.736324] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:55.151 [2024-11-20 11:40:40.736376] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:27:55.409 2024/11/20 11:40:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:55.409 [2024-11-20 11:40:40.749770] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:55.409 [2024-11-20 11:40:40.749808] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:55.410 2024/11/20 11:40:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:55.410 [2024-11-20 11:40:40.760213] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:55.410 [2024-11-20 11:40:40.760252] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:55.410 2024/11/20 11:40:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:55.410 [2024-11-20 11:40:40.776056] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:55.410 [2024-11-20 11:40:40.776093] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:55.410 2024/11/20 11:40:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:55.410 [2024-11-20 11:40:40.795557] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:55.410 [2024-11-20 11:40:40.795640] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:55.410 2024/11/20 11:40:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:55.410 [2024-11-20 11:40:40.813831] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:55.410 [2024-11-20 11:40:40.813892] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:55.410 2024/11/20 11:40:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:55.410 [2024-11-20 11:40:40.828507] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:55.410 [2024-11-20 11:40:40.828549] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:55.410 2024/11/20 11:40:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 00:27:55.410 [2024-11-20 11:40:40.847017] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:55.410 [2024-11-20 11:40:40.847058] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:55.410 2024/11/20 11:40:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:55.410 [2024-11-20 11:40:40.857664] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:55.410 [2024-11-20 11:40:40.857701] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:55.410 2024/11/20 11:40:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:55.410 [2024-11-20 11:40:40.878515] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:55.410 [2024-11-20 11:40:40.878556] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:55.410 2024/11/20 11:40:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:55.410 [2024-11-20 11:40:40.898635] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:55.410 [2024-11-20 11:40:40.898675] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:55.410 2024/11/20 11:40:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:55.410 [2024-11-20 11:40:40.909287] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:55.410 [2024-11-20 11:40:40.909334] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:55.410 2024/11/20 11:40:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:55.410 [2024-11-20 11:40:40.925555] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:55.410 [2024-11-20 11:40:40.925594] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:55.410 2024/11/20 11:40:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:55.410 [2024-11-20 11:40:40.940307] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:55.410 [2024-11-20 11:40:40.940345] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:55.410 2024/11/20 11:40:40 error on 
JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:55.410 [2024-11-20 11:40:40.959866] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:55.410 [2024-11-20 11:40:40.959906] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:55.410 2024/11/20 11:40:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:55.410 [2024-11-20 11:40:40.978439] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:55.410 [2024-11-20 11:40:40.978480] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:55.410 2024/11/20 11:40:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:55.410 [2024-11-20 11:40:40.988161] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:55.410 [2024-11-20 11:40:40.988212] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:55.410 2024/11/20 11:40:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:55.669 [2024-11-20 11:40:41.005058] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:55.669 [2024-11-20 11:40:41.005097] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:55.669 2024/11/20 11:40:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:55.669 [2024-11-20 11:40:41.023103] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:55.669 [2024-11-20 11:40:41.023142] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:55.669 2024/11/20 11:40:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:55.669 [2024-11-20 11:40:41.033160] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:55.669 [2024-11-20 11:40:41.033197] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:55.669 2024/11/20 11:40:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:55.670 [2024-11-20 11:40:41.047687] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:55.670 [2024-11-20 11:40:41.047742] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:55.670 2024/11/20 11:40:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:55.670 [2024-11-20 11:40:41.066404] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:55.670 [2024-11-20 11:40:41.066446] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:55.670 2024/11/20 11:40:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:55.670 [2024-11-20 11:40:41.076928] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:55.670 [2024-11-20 11:40:41.076964] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:55.670 2024/11/20 11:40:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:55.670 [2024-11-20 11:40:41.090850] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:55.670 [2024-11-20 11:40:41.090896] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:55.670 2024/11/20 11:40:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:55.670 [2024-11-20 11:40:41.100461] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:55.670 [2024-11-20 11:40:41.100499] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:55.670 2024/11/20 11:40:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:55.670 [2024-11-20 11:40:41.115964] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:55.670 [2024-11-20 11:40:41.116002] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:55.670 2024/11/20 11:40:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:55.670 [2024-11-20 11:40:41.135504] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:55.670 [2024-11-20 11:40:41.135547] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:55.670 2024/11/20 11:40:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:55.670 [2024-11-20 11:40:41.154994] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:55.670 [2024-11-20 11:40:41.155033] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:55.670 2024/11/20 11:40:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:55.670 [2024-11-20 11:40:41.166333] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:55.670 [2024-11-20 11:40:41.166397] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:55.670 2024/11/20 11:40:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:55.670 [2024-11-20 11:40:41.180565] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:55.670 [2024-11-20 11:40:41.180604] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:55.670 2024/11/20 11:40:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:55.670 [2024-11-20 11:40:41.198958] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:55.670 [2024-11-20 11:40:41.198996] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:55.670 2024/11/20 11:40:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:55.670 [2024-11-20 11:40:41.209885] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:55.670 [2024-11-20 11:40:41.209920] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:55.670 2024/11/20 11:40:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:55.670 [2024-11-20 11:40:41.220526] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:55.670 [2024-11-20 11:40:41.220564] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:55.670 2024/11/20 11:40:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:55.670 [2024-11-20 11:40:41.234447] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:27:55.670 [2024-11-20 11:40:41.234485] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:55.670 2024/11/20 11:40:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:55.670 [2024-11-20 11:40:41.244142] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:55.670 [2024-11-20 11:40:41.244180] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:55.670 2024/11/20 11:40:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:55.929 [2024-11-20 11:40:41.260518] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:55.929 [2024-11-20 11:40:41.260562] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:55.929 2024/11/20 11:40:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:55.929 [2024-11-20 11:40:41.278934] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:55.929 [2024-11-20 11:40:41.278985] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:55.929 2024/11/20 11:40:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:55.929 [2024-11-20 11:40:41.289141] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:55.929 [2024-11-20 11:40:41.289179] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:55.929 2024/11/20 11:40:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:55.929 [2024-11-20 11:40:41.302961] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:55.929 [2024-11-20 11:40:41.302998] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:55.929 2024/11/20 11:40:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:55.929 [2024-11-20 11:40:41.313151] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:55.929 [2024-11-20 11:40:41.313188] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:55.929 2024/11/20 11:40:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 
no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:55.929 [2024-11-20 11:40:41.327767] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:55.929 [2024-11-20 11:40:41.327804] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:55.929 2024/11/20 11:40:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:55.929 [2024-11-20 11:40:41.347699] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:55.929 [2024-11-20 11:40:41.347737] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:55.929 2024/11/20 11:40:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:55.929 [2024-11-20 11:40:41.367352] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:55.929 [2024-11-20 11:40:41.367389] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:55.929 2024/11/20 11:40:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:55.929 [2024-11-20 11:40:41.388466] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:55.929 [2024-11-20 11:40:41.388505] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:55.929 2024/11/20 11:40:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:55.929 [2024-11-20 11:40:41.405361] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:55.929 [2024-11-20 11:40:41.405400] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:55.929 2024/11/20 11:40:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:55.929 [2024-11-20 11:40:41.421578] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:55.929 [2024-11-20 11:40:41.421630] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:55.929 2024/11/20 11:40:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:55.929 [2024-11-20 11:40:41.436632] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:27:55.929 [2024-11-20 11:40:41.436670] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:55.929 2024/11/20 11:40:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:55.929 [2024-11-20 11:40:41.455048] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:55.929 [2024-11-20 11:40:41.455098] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:55.929 2024/11/20 11:40:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:55.929 [2024-11-20 11:40:41.464763] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:55.929 [2024-11-20 11:40:41.464800] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:55.929 2024/11/20 11:40:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:55.929 [2024-11-20 11:40:41.479386] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:55.929 [2024-11-20 11:40:41.479426] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:55.929 2024/11/20 11:40:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:55.929 [2024-11-20 11:40:41.500155] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:55.929 [2024-11-20 11:40:41.500199] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:55.929 2024/11/20 11:40:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:55.929 [2024-11-20 11:40:41.514714] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:55.929 [2024-11-20 11:40:41.514771] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:56.188 2024/11/20 11:40:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:56.188 [2024-11-20 11:40:41.524494] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:56.188 [2024-11-20 11:40:41.524536] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:56.188 2024/11/20 11:40:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:56.188 [2024-11-20 11:40:41.541244] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:56.188 [2024-11-20 11:40:41.541295] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:56.188 11341.00 IOPS, 88.60 MiB/s [2024-11-20T11:40:41.777Z] 2024/11/20 11:40:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:56.188 [2024-11-20 11:40:41.554424] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:56.188 [2024-11-20 11:40:41.554471] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:56.189 2024/11/20 11:40:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:56.189 [2024-11-20 11:40:41.575844] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:56.189 [2024-11-20 11:40:41.575897] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:56.189 2024/11/20 11:40:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:56.189 [2024-11-20 11:40:41.592700] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:56.189 [2024-11-20 11:40:41.592740] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:56.189 2024/11/20 11:40:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:56.189 [2024-11-20 11:40:41.611130] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:56.189 [2024-11-20 11:40:41.611175] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:56.189 2024/11/20 11:40:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:56.189 [2024-11-20 11:40:41.621423] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:56.189 [2024-11-20 11:40:41.621476] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:56.189 2024/11/20 11:40:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:56.189 [2024-11-20 11:40:41.636449] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 
already in use 00:27:56.189 [2024-11-20 11:40:41.636489] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:56.189 2024/11/20 11:40:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:56.189 [2024-11-20 11:40:41.654846] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:56.189 [2024-11-20 11:40:41.654891] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:56.189 2024/11/20 11:40:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:56.189 [2024-11-20 11:40:41.665284] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:56.189 [2024-11-20 11:40:41.665323] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:56.189 2024/11/20 11:40:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:56.189 [2024-11-20 11:40:41.681155] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:56.189 [2024-11-20 11:40:41.681194] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:56.189 2024/11/20 11:40:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:56.189 [2024-11-20 11:40:41.694478] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:56.189 [2024-11-20 11:40:41.694515] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:56.189 2024/11/20 11:40:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:56.189 [2024-11-20 11:40:41.704730] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:56.189 [2024-11-20 11:40:41.704767] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:56.189 2024/11/20 11:40:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:56.189 [2024-11-20 11:40:41.719441] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:56.189 [2024-11-20 11:40:41.719480] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:56.189 2024/11/20 11:40:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:56.189 [2024-11-20 11:40:41.739246] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:56.189 [2024-11-20 11:40:41.739285] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:56.189 2024/11/20 11:40:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:56.189 [2024-11-20 11:40:41.750516] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:56.189 [2024-11-20 11:40:41.750556] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:56.189 2024/11/20 11:40:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:56.189 [2024-11-20 11:40:41.767498] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:56.189 [2024-11-20 11:40:41.767537] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:56.189 2024/11/20 11:40:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:56.448 [2024-11-20 11:40:41.788142] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:56.448 [2024-11-20 11:40:41.788193] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:56.448 2024/11/20 11:40:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:56.448 [2024-11-20 11:40:41.805793] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:56.448 [2024-11-20 11:40:41.805834] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:56.448 2024/11/20 11:40:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:56.448 [2024-11-20 11:40:41.820552] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:56.448 [2024-11-20 11:40:41.820592] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:56.448 2024/11/20 11:40:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:56.448 [2024-11-20 11:40:41.837495] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:56.448 [2024-11-20 11:40:41.837549] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:56.448 2024/11/20 11:40:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:56.448 [2024-11-20 11:40:41.852244] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:56.448 [2024-11-20 11:40:41.852294] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:56.448 2024/11/20 11:40:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:56.448 [2024-11-20 11:40:41.870865] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:56.448 [2024-11-20 11:40:41.870911] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:56.448 2024/11/20 11:40:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:56.448 [2024-11-20 11:40:41.881783] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:56.448 [2024-11-20 11:40:41.881840] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:56.448 2024/11/20 11:40:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:56.448 [2024-11-20 11:40:41.897863] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:56.448 [2024-11-20 11:40:41.897912] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:56.448 2024/11/20 11:40:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:56.448 [2024-11-20 11:40:41.908161] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:56.448 [2024-11-20 11:40:41.908205] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:56.448 2024/11/20 11:40:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:56.448 [2024-11-20 11:40:41.924881] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:56.448 [2024-11-20 11:40:41.924919] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:56.449 2024/11/20 11:40:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for 
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:56.449 [2024-11-20 11:40:41.941343] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:56.449 [2024-11-20 11:40:41.941410] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:56.449 2024/11/20 11:40:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:56.449 [2024-11-20 11:40:41.964033] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:56.449 [2024-11-20 11:40:41.964083] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:56.449 2024/11/20 11:40:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:56.449 [2024-11-20 11:40:41.979088] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:56.449 [2024-11-20 11:40:41.979129] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:56.449 2024/11/20 11:40:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:56.449 [2024-11-20 11:40:41.989214] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:56.449 [2024-11-20 11:40:41.989253] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:56.449 2024/11/20 11:40:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:56.449 [2024-11-20 11:40:42.005070] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:56.449 [2024-11-20 11:40:42.005109] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:56.449 2024/11/20 11:40:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:56.449 [2024-11-20 11:40:42.021436] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:56.449 [2024-11-20 11:40:42.021474] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:56.449 2024/11/20 11:40:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:56.708 [2024-11-20 11:40:42.037793] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:56.708 [2024-11-20 11:40:42.037832] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:27:56.708 2024/11/20 11:40:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:56.708 [2024-11-20 11:40:42.050356] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:56.708 [2024-11-20 11:40:42.050395] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:56.708 2024/11/20 11:40:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:56.708 [2024-11-20 11:40:42.060412] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:56.708 [2024-11-20 11:40:42.060466] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:56.708 2024/11/20 11:40:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:56.708 [2024-11-20 11:40:42.082634] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:56.708 [2024-11-20 11:40:42.082683] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:56.708 2024/11/20 11:40:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:56.708 [2024-11-20 11:40:42.092900] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:56.708 [2024-11-20 11:40:42.092937] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:56.708 2024/11/20 11:40:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:56.708 [2024-11-20 11:40:42.107566] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:56.708 [2024-11-20 11:40:42.107604] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:56.708 2024/11/20 11:40:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:56.708 [2024-11-20 11:40:42.127429] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:56.708 [2024-11-20 11:40:42.127475] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:56.708 2024/11/20 11:40:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 00:27:56.708 [2024-11-20 11:40:42.147801] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:56.708 [2024-11-20 11:40:42.147852] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:56.709 2024/11/20 11:40:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:56.709 [2024-11-20 11:40:42.164742] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:56.709 [2024-11-20 11:40:42.164785] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:56.709 2024/11/20 11:40:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:56.709 [2024-11-20 11:40:42.187115] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:56.709 [2024-11-20 11:40:42.187167] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:56.709 2024/11/20 11:40:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:56.709 [2024-11-20 11:40:42.198328] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:56.709 [2024-11-20 11:40:42.198366] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:56.709 2024/11/20 11:40:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:56.709 [2024-11-20 11:40:42.211939] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:56.709 [2024-11-20 11:40:42.211977] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:56.709 2024/11/20 11:40:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:56.709 [2024-11-20 11:40:42.231833] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:56.709 [2024-11-20 11:40:42.231872] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:56.709 2024/11/20 11:40:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:56.709 [2024-11-20 11:40:42.252007] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:56.709 [2024-11-20 11:40:42.252073] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:56.709 2024/11/20 11:40:42 error on 
JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:56.709 [2024-11-20 11:40:42.270672] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:56.709 [2024-11-20 11:40:42.270710] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:56.709 2024/11/20 11:40:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:56.709 [2024-11-20 11:40:42.280726] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:56.709 [2024-11-20 11:40:42.280763] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:56.709 2024/11/20 11:40:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:56.968 [2024-11-20 11:40:42.297297] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:56.968 [2024-11-20 11:40:42.297335] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:56.968 2024/11/20 11:40:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:56.968 [2024-11-20 11:40:42.313393] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:56.968 [2024-11-20 11:40:42.313431] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:56.968 2024/11/20 11:40:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:56.968 [2024-11-20 11:40:42.329675] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:56.968 [2024-11-20 11:40:42.329713] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:56.968 2024/11/20 11:40:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:56.968 [2024-11-20 11:40:42.344471] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:56.968 [2024-11-20 11:40:42.344510] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:56.968 2024/11/20 11:40:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:56.968 [2024-11-20 11:40:42.362859] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:56.968 [2024-11-20 11:40:42.362898] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:56.968 2024/11/20 11:40:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:56.968 [2024-11-20 11:40:42.373180] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:56.968 [2024-11-20 11:40:42.373218] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:56.968 2024/11/20 11:40:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:56.968 [2024-11-20 11:40:42.387418] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:56.968 [2024-11-20 11:40:42.387458] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:56.968 2024/11/20 11:40:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:56.968 [2024-11-20 11:40:42.406672] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:56.968 [2024-11-20 11:40:42.406721] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:56.968 2024/11/20 11:40:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:56.968 [2024-11-20 11:40:42.417033] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:56.968 [2024-11-20 11:40:42.417071] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:56.968 2024/11/20 11:40:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:56.968 [2024-11-20 11:40:42.431191] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:56.968 [2024-11-20 11:40:42.431234] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:56.968 2024/11/20 11:40:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:56.968 [2024-11-20 11:40:42.452204] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:56.968 [2024-11-20 11:40:42.452276] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:56.968 2024/11/20 11:40:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:56.968 [2024-11-20 11:40:42.468887] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:56.968 [2024-11-20 11:40:42.468925] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:56.968 2024/11/20 11:40:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:56.968 [2024-11-20 11:40:42.485416] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:56.968 [2024-11-20 11:40:42.485456] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:56.969 2024/11/20 11:40:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:56.969 [2024-11-20 11:40:42.501281] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:56.969 [2024-11-20 11:40:42.501319] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:56.969 2024/11/20 11:40:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:56.969 [2024-11-20 11:40:42.517466] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:56.969 [2024-11-20 11:40:42.517506] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:56.969 2024/11/20 11:40:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:56.969 [2024-11-20 11:40:42.527808] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:56.969 [2024-11-20 11:40:42.527848] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:56.969 2024/11/20 11:40:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:56.969 11283.67 IOPS, 88.15 MiB/s [2024-11-20T11:40:42.558Z] [2024-11-20 11:40:42.544108] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:56.969 [2024-11-20 11:40:42.544147] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:56.969 2024/11/20 11:40:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:57.228 [2024-11-20 11:40:42.563341] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:57.228 [2024-11-20 11:40:42.563404] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:57.228 2024/11/20 11:40:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:57.228 [2024-11-20 11:40:42.581360] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:57.228 [2024-11-20 11:40:42.581400] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:57.228 2024/11/20 11:40:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:57.228 [2024-11-20 11:40:42.595462] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:57.228 [2024-11-20 11:40:42.595501] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:57.228 2024/11/20 11:40:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:57.228 [2024-11-20 11:40:42.616151] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:57.228 [2024-11-20 11:40:42.616190] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:57.228 2024/11/20 11:40:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:57.228 [2024-11-20 11:40:42.634402] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:57.228 [2024-11-20 11:40:42.634441] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:57.228 2024/11/20 11:40:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:57.228 [2024-11-20 11:40:42.644283] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:57.228 [2024-11-20 11:40:42.644322] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:57.228 2024/11/20 11:40:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:57.228 [2024-11-20 11:40:42.660644] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:57.228 [2024-11-20 11:40:42.660682] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:57.228 2024/11/20 11:40:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:57.228 [2024-11-20 11:40:42.676881] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:57.228 [2024-11-20 11:40:42.676927] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:57.228 2024/11/20 11:40:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:57.228 [2024-11-20 11:40:42.693313] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:57.228 [2024-11-20 11:40:42.693352] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:57.228 2024/11/20 11:40:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:57.228 [2024-11-20 11:40:42.708401] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:57.228 [2024-11-20 11:40:42.708438] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:57.228 2024/11/20 11:40:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:57.228 [2024-11-20 11:40:42.727836] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:57.228 [2024-11-20 11:40:42.727874] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:57.228 2024/11/20 11:40:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:57.228 [2024-11-20 11:40:42.745796] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:57.228 [2024-11-20 11:40:42.745835] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:57.228 2024/11/20 11:40:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:57.228 [2024-11-20 11:40:42.759395] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:57.228 [2024-11-20 11:40:42.759435] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:57.228 2024/11/20 11:40:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:57.228 [2024-11-20 11:40:42.780264] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:27:57.228 [2024-11-20 11:40:42.780303] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:57.228 2024/11/20 11:40:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:57.228 [2024-11-20 11:40:42.794628] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:57.228 [2024-11-20 11:40:42.794690] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:57.228 2024/11/20 11:40:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:57.228 [2024-11-20 11:40:42.804674] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:57.228 [2024-11-20 11:40:42.804714] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:57.228 2024/11/20 11:40:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:57.487 [2024-11-20 11:40:42.819807] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:57.487 [2024-11-20 11:40:42.819848] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:57.487 2024/11/20 11:40:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:57.487 [2024-11-20 11:40:42.839762] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:57.487 [2024-11-20 11:40:42.839800] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:57.487 2024/11/20 11:40:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:57.487 [2024-11-20 11:40:42.855627] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:57.487 [2024-11-20 11:40:42.855669] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:57.487 2024/11/20 11:40:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:57.487 [2024-11-20 11:40:42.874734] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:57.487 [2024-11-20 11:40:42.874781] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:57.487 2024/11/20 11:40:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 
no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:57.487 [2024-11-20 11:40:42.884645] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:57.487 [2024-11-20 11:40:42.884685] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:57.487 2024/11/20 11:40:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:57.487 [2024-11-20 11:40:42.900068] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:57.487 [2024-11-20 11:40:42.900110] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:57.487 2024/11/20 11:40:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:57.487 [2024-11-20 11:40:42.919905] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:57.487 [2024-11-20 11:40:42.919952] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:57.487 2024/11/20 11:40:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:57.487 [2024-11-20 11:40:42.935126] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:57.487 [2024-11-20 11:40:42.935177] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:57.487 2024/11/20 11:40:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:57.487 [2024-11-20 11:40:42.955989] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:57.487 [2024-11-20 11:40:42.956034] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:57.487 2024/11/20 11:40:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:57.487 [2024-11-20 11:40:42.973021] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:57.487 [2024-11-20 11:40:42.973070] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:57.487 2024/11/20 11:40:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:57.488 [2024-11-20 11:40:42.989460] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:27:57.488 [2024-11-20 11:40:42.989500] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:57.488 2024/11/20 11:40:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:57.488 [2024-11-20 11:40:43.003425] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:57.488 [2024-11-20 11:40:43.003467] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:57.488 2024/11/20 11:40:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:57.488 [2024-11-20 11:40:43.023476] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:57.488 [2024-11-20 11:40:43.023528] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:57.488 2024/11/20 11:40:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:57.488 [2024-11-20 11:40:43.042629] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:57.488 [2024-11-20 11:40:43.042669] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:57.488 2024/11/20 11:40:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:57.488 [2024-11-20 11:40:43.053996] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:57.488 [2024-11-20 11:40:43.054038] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:57.488 2024/11/20 11:40:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:57.488 [2024-11-20 11:40:43.065246] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:57.488 [2024-11-20 11:40:43.065285] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:57.488 2024/11/20 11:40:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:57.748 [2024-11-20 11:40:43.080918] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:57.748 [2024-11-20 11:40:43.080960] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:57.748 2024/11/20 11:40:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:57.748 [2024-11-20 11:40:43.097503] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:57.748 [2024-11-20 11:40:43.097553] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:57.748 2024/11/20 11:40:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:57.748 [2024-11-20 11:40:43.113727] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:57.748 [2024-11-20 11:40:43.113767] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:57.748 2024/11/20 11:40:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:57.748 [2024-11-20 11:40:43.128313] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:57.748 [2024-11-20 11:40:43.128352] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:57.748 2024/11/20 11:40:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:57.748 [2024-11-20 11:40:43.147122] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:57.748 [2024-11-20 11:40:43.147168] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:57.748 2024/11/20 11:40:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:57.748 [2024-11-20 11:40:43.158506] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:57.748 [2024-11-20 11:40:43.158547] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:57.748 2024/11/20 11:40:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:57.748 [2024-11-20 11:40:43.172756] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:57.748 [2024-11-20 11:40:43.172796] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:57.748 2024/11/20 11:40:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:57.748 [2024-11-20 11:40:43.189246] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:57.748 [2024-11-20 11:40:43.189293] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:57.748 2024/11/20 11:40:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:57.748 [2024-11-20 11:40:43.204317] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:57.748 [2024-11-20 11:40:43.204361] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:57.748 2024/11/20 11:40:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:57.748 [2024-11-20 11:40:43.223191] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:57.748 [2024-11-20 11:40:43.223236] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:57.748 2024/11/20 11:40:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:57.748 [2024-11-20 11:40:43.233422] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:57.748 [2024-11-20 11:40:43.233459] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:57.748 2024/11/20 11:40:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:57.748 [2024-11-20 11:40:43.247799] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:57.748 [2024-11-20 11:40:43.247836] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:57.748 2024/11/20 11:40:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:57.748 [2024-11-20 11:40:43.267669] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:57.748 [2024-11-20 11:40:43.267709] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:57.748 2024/11/20 11:40:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:57.748 [2024-11-20 11:40:43.288136] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:57.748 [2024-11-20 11:40:43.288185] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:57.748 2024/11/20 11:40:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for 
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:57.748 [2024-11-20 11:40:43.303369] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:57.748 [2024-11-20 11:40:43.303411] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:57.748 2024/11/20 11:40:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:57.748 [2024-11-20 11:40:43.323988] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:57.748 [2024-11-20 11:40:43.324033] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:57.748 2024/11/20 11:40:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:58.007 [2024-11-20 11:40:43.339885] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:58.007 [2024-11-20 11:40:43.339926] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:58.007 2024/11/20 11:40:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:58.007 [2024-11-20 11:40:43.358561] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:58.007 [2024-11-20 11:40:43.358601] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:58.007 2024/11/20 11:40:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:58.007 [2024-11-20 11:40:43.368962] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:58.007 [2024-11-20 11:40:43.368998] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:58.007 2024/11/20 11:40:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:58.007 [2024-11-20 11:40:43.383292] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:58.007 [2024-11-20 11:40:43.383329] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:58.007 2024/11/20 11:40:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:58.007 [2024-11-20 11:40:43.404113] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:58.007 [2024-11-20 11:40:43.404155] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:27:58.007 2024/11/20 11:40:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:58.007 [2024-11-20 11:40:43.421013] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:58.007 [2024-11-20 11:40:43.421056] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:58.007 2024/11/20 11:40:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:58.007 [2024-11-20 11:40:43.439170] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:58.007 [2024-11-20 11:40:43.439215] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:58.007 2024/11/20 11:40:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:58.007 [2024-11-20 11:40:43.449141] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:58.007 [2024-11-20 11:40:43.449179] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:58.007 2024/11/20 11:40:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:58.007 [2024-11-20 11:40:43.464923] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:58.008 [2024-11-20 11:40:43.464966] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:58.008 2024/11/20 11:40:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:58.008 [2024-11-20 11:40:43.483159] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:58.008 [2024-11-20 11:40:43.483222] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:58.008 2024/11/20 11:40:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:58.008 [2024-11-20 11:40:43.492899] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:58.008 [2024-11-20 11:40:43.492937] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:58.008 2024/11/20 11:40:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 00:27:58.008 [2024-11-20 11:40:43.507580] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:58.008 [2024-11-20 11:40:43.507631] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:58.008 2024/11/20 11:40:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:58.008 [2024-11-20 11:40:43.526660] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:58.008 [2024-11-20 11:40:43.526702] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:58.008 2024/11/20 11:40:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:58.008 11265.00 IOPS, 88.01 MiB/s [2024-11-20T11:40:43.597Z] [2024-11-20 11:40:43.548123] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:58.008 [2024-11-20 11:40:43.548169] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:58.008 2024/11/20 11:40:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:58.008 [2024-11-20 11:40:43.564213] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:58.008 [2024-11-20 11:40:43.564275] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:58.008 2024/11/20 11:40:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:58.008 [2024-11-20 11:40:43.581141] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:58.008 [2024-11-20 11:40:43.581187] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:58.008 2024/11/20 11:40:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:58.267 [2024-11-20 11:40:43.596572] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:58.267 [2024-11-20 11:40:43.596626] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:58.267 2024/11/20 11:40:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:58.267 [2024-11-20 11:40:43.612932] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:58.267 [2024-11-20 11:40:43.612971] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:27:58.267 2024/11/20 11:40:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:58.267 [2024-11-20 11:40:43.631305] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:58.267 [2024-11-20 11:40:43.631367] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:58.267 2024/11/20 11:40:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:58.267 [2024-11-20 11:40:43.649448] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:58.267 [2024-11-20 11:40:43.649490] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:58.267 2024/11/20 11:40:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:58.267 [2024-11-20 11:40:43.664194] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:58.267 [2024-11-20 11:40:43.664232] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:58.267 2024/11/20 11:40:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:58.267 [2024-11-20 11:40:43.684142] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:58.267 [2024-11-20 11:40:43.684188] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:58.267 2024/11/20 11:40:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:58.267 [2024-11-20 11:40:43.701081] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:58.267 [2024-11-20 11:40:43.701121] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:58.267 2024/11/20 11:40:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:58.267 [2024-11-20 11:40:43.714243] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:58.267 [2024-11-20 11:40:43.714284] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:58.267 2024/11/20 11:40:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 00:27:58.267 [2024-11-20 11:40:43.724100] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:58.268 [2024-11-20 11:40:43.724136] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:58.268 2024/11/20 11:40:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:58.268 [2024-11-20 11:40:43.740720] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:58.268 [2024-11-20 11:40:43.740781] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:58.268 2024/11/20 11:40:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:58.268 [2024-11-20 11:40:43.756814] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:58.268 [2024-11-20 11:40:43.756855] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:58.268 2024/11/20 11:40:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:58.268 [2024-11-20 11:40:43.773328] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:58.268 [2024-11-20 11:40:43.773374] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:58.268 2024/11/20 11:40:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:58.268 [2024-11-20 11:40:43.789584] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:58.268 [2024-11-20 11:40:43.789639] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:58.268 2024/11/20 11:40:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:58.268 [2024-11-20 11:40:43.805167] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:58.268 [2024-11-20 11:40:43.805207] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:58.268 2024/11/20 11:40:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:58.268 [2024-11-20 11:40:43.821368] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:58.268 [2024-11-20 11:40:43.821411] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:58.268 2024/11/20 11:40:43 error on 
JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:58.268 [2024-11-20 11:40:43.837192] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:58.268 [2024-11-20 11:40:43.837232] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:58.268 2024/11/20 11:40:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:58.268 [2024-11-20 11:40:43.853039] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:58.268 [2024-11-20 11:40:43.853091] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:58.527 2024/11/20 11:40:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:58.527 [2024-11-20 11:40:43.868633] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:58.527 [2024-11-20 11:40:43.868686] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:58.527 2024/11/20 11:40:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:58.527 [2024-11-20 11:40:43.884864] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:58.527 [2024-11-20 11:40:43.884903] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:58.527 2024/11/20 11:40:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:58.527 [2024-11-20 11:40:43.901774] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:58.527 [2024-11-20 11:40:43.901817] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:58.527 2024/11/20 11:40:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:58.527 [2024-11-20 11:40:43.911989] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:58.527 [2024-11-20 11:40:43.912026] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:58.527 2024/11/20 11:40:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:58.527 [2024-11-20 11:40:43.927193] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:58.527 [2024-11-20 11:40:43.927244] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:58.527 2024/11/20 11:40:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:58.527 [2024-11-20 11:40:43.937329] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:58.527 [2024-11-20 11:40:43.937389] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:58.527 2024/11/20 11:40:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:58.527 [2024-11-20 11:40:43.953779] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:58.527 [2024-11-20 11:40:43.953819] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:58.527 2024/11/20 11:40:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:58.527 [2024-11-20 11:40:43.968245] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:58.527 [2024-11-20 11:40:43.968285] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:58.528 2024/11/20 11:40:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:58.528 [2024-11-20 11:40:43.987881] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:58.528 [2024-11-20 11:40:43.987926] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:58.528 2024/11/20 11:40:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:58.528 [2024-11-20 11:40:44.005894] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:58.528 [2024-11-20 11:40:44.005930] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:58.528 2024/11/20 11:40:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:58.528 [2024-11-20 11:40:44.015937] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:58.528 [2024-11-20 11:40:44.015975] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:58.528 2024/11/20 11:40:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:58.528 [2024-11-20 11:40:44.032453] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:58.528 [2024-11-20 11:40:44.032502] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:58.528 2024/11/20 11:40:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:58.528 [2024-11-20 11:40:44.050677] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:58.528 [2024-11-20 11:40:44.050715] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:58.528 2024/11/20 11:40:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:58.528 [2024-11-20 11:40:44.062134] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:58.528 [2024-11-20 11:40:44.062182] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:58.528 2024/11/20 11:40:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:58.528 [2024-11-20 11:40:44.075059] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:58.528 [2024-11-20 11:40:44.075118] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:58.528 2024/11/20 11:40:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:58.528 [2024-11-20 11:40:44.087457] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:58.528 [2024-11-20 11:40:44.087505] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:58.528 2024/11/20 11:40:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:58.528 [2024-11-20 11:40:44.104274] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:58.528 [2024-11-20 11:40:44.104315] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:58.528 2024/11/20 11:40:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:58.787 [2024-11-20 11:40:44.123697] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:27:58.787 [2024-11-20 11:40:44.123745] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:58.787 2024/11/20 11:40:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:58.787 [2024-11-20 11:40:44.141767] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:58.787 [2024-11-20 11:40:44.141820] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:58.787 2024/11/20 11:40:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:58.787 [2024-11-20 11:40:44.156448] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:58.787 [2024-11-20 11:40:44.156493] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:58.787 2024/11/20 11:40:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:58.787 [2024-11-20 11:40:44.174222] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:58.787 [2024-11-20 11:40:44.174260] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:58.787 2024/11/20 11:40:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:58.787 [2024-11-20 11:40:44.195838] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:58.787 [2024-11-20 11:40:44.195884] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:58.787 2024/11/20 11:40:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:58.787 [2024-11-20 11:40:44.211691] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:58.787 [2024-11-20 11:40:44.211733] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:58.787 2024/11/20 11:40:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:58.787 [2024-11-20 11:40:44.231127] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:58.787 [2024-11-20 11:40:44.231183] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:58.787 2024/11/20 11:40:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 
no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:58.787 [2024-11-20 11:40:44.241131] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:58.787 [2024-11-20 11:40:44.241186] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:58.787 2024/11/20 11:40:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:58.787 [2024-11-20 11:40:44.256921] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:58.787 [2024-11-20 11:40:44.256961] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:58.787 2024/11/20 11:40:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:58.787 [2024-11-20 11:40:44.275154] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:58.787 [2024-11-20 11:40:44.275192] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:58.787 2024/11/20 11:40:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:58.787 [2024-11-20 11:40:44.285217] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:58.787 [2024-11-20 11:40:44.285254] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:58.787 2024/11/20 11:40:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:58.787 [2024-11-20 11:40:44.299788] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:58.787 [2024-11-20 11:40:44.299825] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:58.787 2024/11/20 11:40:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:58.787 [2024-11-20 11:40:44.319225] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:58.787 [2024-11-20 11:40:44.319261] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:58.787 2024/11/20 11:40:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:58.787 [2024-11-20 11:40:44.330858] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:27:58.787 [2024-11-20 11:40:44.330903] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:58.787 2024/11/20 11:40:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:58.787 [2024-11-20 11:40:44.342935] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:58.787 [2024-11-20 11:40:44.342973] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:58.787 2024/11/20 11:40:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:58.787 [2024-11-20 11:40:44.355568] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:58.787 [2024-11-20 11:40:44.355607] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:58.787 2024/11/20 11:40:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:58.787 [2024-11-20 11:40:44.371854] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:58.787 [2024-11-20 11:40:44.371892] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:59.047 2024/11/20 11:40:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:59.047 [2024-11-20 11:40:44.391343] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:59.047 [2024-11-20 11:40:44.391382] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:59.047 2024/11/20 11:40:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:59.047 [2024-11-20 11:40:44.411686] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:59.047 [2024-11-20 11:40:44.411731] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:59.047 2024/11/20 11:40:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:59.047 [2024-11-20 11:40:44.429481] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:59.047 [2024-11-20 11:40:44.429526] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:59.047 2024/11/20 11:40:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:59.047 [2024-11-20 11:40:44.444539] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:59.047 [2024-11-20 11:40:44.444578] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:59.047 2024/11/20 11:40:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:59.047 [2024-11-20 11:40:44.462604] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:59.047 [2024-11-20 11:40:44.462656] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:59.047 2024/11/20 11:40:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:59.047 [2024-11-20 11:40:44.472841] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:59.047 [2024-11-20 11:40:44.472887] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:59.047 2024/11/20 11:40:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:59.047 [2024-11-20 11:40:44.487096] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:59.047 [2024-11-20 11:40:44.487153] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:59.047 2024/11/20 11:40:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:59.047 [2024-11-20 11:40:44.497205] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:59.047 [2024-11-20 11:40:44.497241] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:59.047 2024/11/20 11:40:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:59.047 [2024-11-20 11:40:44.511759] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:59.047 [2024-11-20 11:40:44.511803] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:59.047 2024/11/20 11:40:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:59.047 [2024-11-20 11:40:44.531470] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:59.047 [2024-11-20 11:40:44.531514] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:59.047 2024/11/20 11:40:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:59.047 11257.40 IOPS, 87.95 MiB/s [2024-11-20T11:40:44.636Z] [2024-11-20 11:40:44.550562] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:59.047 [2024-11-20 11:40:44.550605] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:59.047 2024/11/20 11:40:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:59.047 00:27:59.047 Latency(us) 00:27:59.047 [2024-11-20T11:40:44.636Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:59.047 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:27:59.047 Nvme1n1 : 5.01 11256.36 87.94 0.00 0.00 11355.95 2829.96 18707.55 00:27:59.047 [2024-11-20T11:40:44.636Z] =================================================================================================================== 00:27:59.047 [2024-11-20T11:40:44.636Z] Total : 11256.36 87.94 0.00 0.00 11355.95 2829.96 18707.55 00:27:59.047 [2024-11-20 11:40:44.558920] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:59.047 [2024-11-20 11:40:44.558955] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:59.047 2024/11/20 11:40:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:59.047 [2024-11-20 11:40:44.570948] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:59.047 [2024-11-20 11:40:44.571000] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:59.047 2024/11/20 11:40:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:59.047 [2024-11-20 11:40:44.582942] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:59.047 [2024-11-20 11:40:44.582990] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:59.047 2024/11/20 11:40:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:59.047 [2024-11-20 11:40:44.594958] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:59.047 [2024-11-20 11:40:44.595010] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:59.047 2024/11/20 11:40:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 
no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:59.047 [2024-11-20 11:40:44.606948] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:59.047 [2024-11-20 11:40:44.606996] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:59.047 2024/11/20 11:40:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:59.047 [2024-11-20 11:40:44.618940] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:59.047 [2024-11-20 11:40:44.618985] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:59.047 2024/11/20 11:40:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:59.047 [2024-11-20 11:40:44.630939] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:59.047 [2024-11-20 11:40:44.630982] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:59.307 2024/11/20 11:40:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:59.307 [2024-11-20 11:40:44.642910] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:59.307 [2024-11-20 11:40:44.642943] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:59.307 2024/11/20 11:40:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:59.307 [2024-11-20 11:40:44.654950] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:59.307 [2024-11-20 11:40:44.655002] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:59.307 2024/11/20 11:40:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:59.307 [2024-11-20 11:40:44.666945] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:59.307 [2024-11-20 11:40:44.666991] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:59.307 2024/11/20 11:40:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:59.307 [2024-11-20 11:40:44.678920] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
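The wall of identical errors above and below comes from the zcopy test deliberately re-issuing nvmf_subsystem_add_ns for a namespace ID that is already attached while I/O is in flight; each attempt is rejected with JSON-RPC error -32602 (Invalid parameters). Issued by hand, one such attempt would look roughly like this minimal sketch, assuming SPDK's scripts/rpc.py helper and the default /var/tmp/spdk.sock RPC socket, with the bdev and NQN names taken from the log:

    # NSID 1 of cnode1 is already backed by a bdev, so the target logs
    # "Requested NSID 1 already in use" and rejects the call with Code=-32602,
    # exactly as seen in the trace above.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns \
        nqn.2016-06.io.spdk:cnode1 malloc0 --nsid 1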
00:27:59.307 [2024-11-20 11:40:44.678954] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:59.307 2024/11/20 11:40:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:59.307 [2024-11-20 11:40:44.690903] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:59.307 [2024-11-20 11:40:44.690934] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:59.307 2024/11/20 11:40:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:59.307 /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (104481) - No such process 00:27:59.307 11:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 104481 00:27:59.307 11:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:59.307 11:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:59.307 11:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:27:59.307 11:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:59.307 11:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:27:59.307 11:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:59.307 11:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:27:59.307 delay0 00:27:59.307 11:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:59.307 11:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:27:59.307 11:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:59.307 11:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:27:59.307 11:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:59.307 11:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 ns:1' 00:27:59.565 [2024-11-20 11:40:44.900180] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:28:07.681 Initializing NVMe Controllers 00:28:07.681 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:28:07.681 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:07.681 Initialization complete. 
Launching workers. 00:28:07.681 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 247, failed: 22407 00:28:07.681 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 22528, failed to submit 126 00:28:07.681 success 22445, unsuccessful 83, failed 0 00:28:07.681 11:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:28:07.681 11:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:28:07.681 11:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:07.682 11:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:28:07.682 11:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:07.682 11:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:28:07.682 11:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:07.682 11:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:07.682 rmmod nvme_tcp 00:28:07.682 rmmod nvme_fabrics 00:28:07.682 rmmod nvme_keyring 00:28:07.682 11:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:07.682 11:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:28:07.682 11:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:28:07.682 11:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 104342 ']' 00:28:07.682 11:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 104342 00:28:07.682 11:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 104342 ']' 00:28:07.682 11:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 104342 00:28:07.682 11:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:28:07.682 11:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:07.682 11:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 104342 00:28:07.682 11:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:07.682 11:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:07.682 killing process with pid 104342 00:28:07.682 11:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 104342' 00:28:07.682 11:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 104342 00:28:07.682 11:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 104342 00:28:07.682 11:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:07.682 11:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:07.682 11:40:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:07.682 11:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:28:07.682 11:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:07.682 11:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:28:07.682 11:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:28:07.682 11:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:07.682 11:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:28:07.682 11:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:28:07.682 11:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:28:07.682 11:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:28:07.682 11:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:28:07.682 11:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:28:07.682 11:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:28:07.682 11:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:28:07.682 11:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:28:07.682 11:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:28:07.682 11:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:28:07.682 11:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:28:07.682 11:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:28:07.682 11:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:28:07.682 11:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@246 -- # remove_spdk_ns 00:28:07.682 11:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:07.682 11:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:07.682 11:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:07.682 11:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@300 -- # return 0 00:28:07.682 00:28:07.682 real 0m24.945s 00:28:07.682 user 0m38.833s 00:28:07.682 sys 0m7.798s 00:28:07.682 11:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:07.682 11:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
common/autotest_common.sh@10 -- # set +x 00:28:07.682 ************************************ 00:28:07.682 END TEST nvmf_zcopy 00:28:07.682 ************************************ 00:28:07.682 11:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:28:07.682 11:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:28:07.682 11:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:07.682 11:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:28:07.682 ************************************ 00:28:07.682 START TEST nvmf_nmic 00:28:07.682 ************************************ 00:28:07.682 11:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:28:07.682 * Looking for test storage... 00:28:07.682 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:28:07.682 11:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:28:07.682 11:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1693 -- # lcov --version 00:28:07.682 11:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:28:07.682 11:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:28:07.682 11:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:07.682 11:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:07.682 11:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:07.682 11:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:28:07.682 11:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:28:07.682 11:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:28:07.682 11:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:28:07.682 11:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:28:07.682 11:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:28:07.682 11:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:28:07.682 11:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:07.682 11:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:28:07.682 11:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:28:07.682 11:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:07.682 11:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:07.682 11:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:28:07.682 11:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:28:07.682 11:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:07.682 11:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:28:07.682 11:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:28:07.682 11:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:28:07.682 11:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:28:07.682 11:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:07.682 11:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:28:07.682 11:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:28:07.682 11:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:07.682 11:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:07.682 11:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:28:07.682 11:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:07.682 11:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:28:07.682 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:07.682 --rc genhtml_branch_coverage=1 00:28:07.682 --rc genhtml_function_coverage=1 00:28:07.682 --rc genhtml_legend=1 00:28:07.682 --rc geninfo_all_blocks=1 00:28:07.682 --rc geninfo_unexecuted_blocks=1 00:28:07.682 00:28:07.682 ' 00:28:07.682 11:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:28:07.682 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:07.682 --rc genhtml_branch_coverage=1 00:28:07.682 --rc genhtml_function_coverage=1 00:28:07.682 --rc genhtml_legend=1 00:28:07.682 --rc geninfo_all_blocks=1 00:28:07.682 --rc geninfo_unexecuted_blocks=1 00:28:07.682 00:28:07.682 ' 00:28:07.683 11:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:28:07.683 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:07.683 --rc genhtml_branch_coverage=1 00:28:07.683 --rc genhtml_function_coverage=1 00:28:07.683 --rc genhtml_legend=1 00:28:07.683 --rc geninfo_all_blocks=1 00:28:07.683 --rc geninfo_unexecuted_blocks=1 00:28:07.683 00:28:07.683 ' 00:28:07.683 11:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:28:07.683 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:07.683 --rc genhtml_branch_coverage=1 00:28:07.683 --rc genhtml_function_coverage=1 00:28:07.683 --rc genhtml_legend=1 00:28:07.683 --rc geninfo_all_blocks=1 00:28:07.683 --rc geninfo_unexecuted_blocks=1 00:28:07.683 00:28:07.683 ' 00:28:07.683 11:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@9 -- 
# source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:28:07.683 11:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:28:07.683 11:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:07.683 11:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:07.683 11:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:07.683 11:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:07.683 11:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:07.683 11:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:07.683 11:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:07.683 11:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:07.683 11:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:07.683 11:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:07.683 11:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a 00:28:07.683 11:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=806d442c-08ba-4719-b76d-33169283459a 00:28:07.683 11:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:07.683 11:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:07.683 11:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:28:07.683 11:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:07.683 11:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:28:07.683 11:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:28:07.683 11:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:07.683 11:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:07.683 11:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:07.683 11:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:07.683 11:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:07.683 11:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:07.683 11:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:28:07.683 11:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:07.683 11:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:28:07.683 11:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:07.683 11:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:07.683 11:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:07.683 11:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:07.683 11:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:07.683 11:40:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:28:07.683 11:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:28:07.683 11:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:07.683 11:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:07.683 11:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:07.683 11:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:07.683 11:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:07.683 11:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:28:07.683 11:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:07.683 11:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:07.683 11:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:07.683 11:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:07.683 11:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:07.683 11:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:07.683 11:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:07.683 11:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:07.683 11:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:28:07.683 11:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:28:07.683 11:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:28:07.683 11:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:28:07.683 11:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:28:07.683 11:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@460 -- # nvmf_veth_init 00:28:07.683 11:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:07.683 11:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:28:07.683 11:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:28:07.683 11:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:28:07.683 11:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:07.683 11:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:28:07.683 11:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@151 -- # 
NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:28:07.683 11:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:28:07.683 11:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:28:07.683 11:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:28:07.683 11:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:28:07.683 11:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:07.683 11:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:28:07.683 11:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:28:07.683 11:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:28:07.683 11:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:28:07.683 11:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:28:07.683 Cannot find device "nvmf_init_br" 00:28:07.683 11:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@162 -- # true 00:28:07.683 11:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:28:07.683 Cannot find device "nvmf_init_br2" 00:28:07.684 11:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@163 -- # true 00:28:07.684 11:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:28:07.684 Cannot find device "nvmf_tgt_br" 00:28:07.684 11:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@164 -- # true 00:28:07.684 11:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:28:07.684 Cannot find device "nvmf_tgt_br2" 00:28:07.684 11:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@165 -- # true 00:28:07.684 11:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:28:07.684 Cannot find device "nvmf_init_br" 00:28:07.684 11:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@166 -- # true 00:28:07.684 11:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:28:07.684 Cannot find device "nvmf_init_br2" 00:28:07.684 11:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@167 -- # true 00:28:07.684 11:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:28:07.684 Cannot find device "nvmf_tgt_br" 00:28:07.684 11:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@168 -- # true 00:28:07.684 11:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:28:07.684 Cannot find device "nvmf_tgt_br2" 00:28:07.684 11:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@169 -- # true 
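The "Cannot find device" messages above are nvmf_veth_init tearing down any leftover interfaces before it rebuilds the test topology. Condensed, what the common.sh helpers create next is roughly the following sketch, using only the interface names and 10.0.0.x addresses visible in the trace:

    # The target side lives in its own network namespace; the initiator stays in the root ns.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator side, 10.0.0.1/24
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target side, 10.0.0.3/24
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    # A second veth pair (nvmf_init_if2 / nvmf_tgt_if2, addresses 10.0.0.2 and 10.0.0.4) is
    # set up the same way, everything is enslaved to the nvmf_br bridge, and reachability is
    # then verified with the pings that follow in the trace.
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br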
00:28:07.684 11:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:28:07.684 Cannot find device "nvmf_br" 00:28:07.684 11:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@170 -- # true 00:28:07.684 11:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:28:07.684 Cannot find device "nvmf_init_if" 00:28:07.684 11:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@171 -- # true 00:28:07.684 11:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:28:07.684 Cannot find device "nvmf_init_if2" 00:28:07.684 11:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@172 -- # true 00:28:07.684 11:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:28:07.684 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:28:07.684 11:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@173 -- # true 00:28:07.684 11:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:28:07.684 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:28:07.684 11:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@174 -- # true 00:28:07.684 11:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:28:07.684 11:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:28:07.684 11:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:28:07.684 11:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:28:07.684 11:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:28:07.684 11:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:28:07.684 11:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:28:07.684 11:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:28:07.684 11:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:28:07.684 11:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:28:07.684 11:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:28:07.684 11:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:28:07.684 11:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:28:07.684 11:40:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:28:07.684 11:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:28:07.684 11:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:28:07.684 11:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:28:07.684 11:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:28:07.684 11:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:28:07.684 11:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:28:07.684 11:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:28:07.684 11:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:28:07.684 11:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:28:07.684 11:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:28:07.684 11:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:28:07.684 11:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:28:07.684 11:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:28:07.684 11:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:28:07.684 11:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:28:07.684 11:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:28:07.684 11:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:28:07.684 11:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:28:07.684 11:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:28:07.684 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:28:07.684 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.073 ms 00:28:07.684 00:28:07.684 --- 10.0.0.3 ping statistics --- 00:28:07.684 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:07.684 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:28:07.684 11:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:28:07.684 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:28:07.684 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.049 ms 00:28:07.684 00:28:07.684 --- 10.0.0.4 ping statistics --- 00:28:07.684 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:07.684 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:28:07.684 11:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:28:07.684 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:07.684 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:28:07.684 00:28:07.684 --- 10.0.0.1 ping statistics --- 00:28:07.684 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:07.684 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:28:07.684 11:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:28:07.684 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:07.684 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.054 ms 00:28:07.684 00:28:07.684 --- 10.0.0.2 ping statistics --- 00:28:07.684 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:07.684 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:28:07.684 11:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:07.684 11:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@461 -- # return 0 00:28:07.684 11:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:07.684 11:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:07.684 11:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:07.684 11:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:07.684 11:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:07.684 11:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:07.684 11:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:07.684 11:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:28:07.684 11:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:07.684 11:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:07.684 11:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:28:07.684 11:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=104865 00:28:07.684 11:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 104865 00:28:07.684 11:40:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:28:07.684 11:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 104865 ']' 00:28:07.684 11:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:07.684 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:07.684 11:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:07.684 11:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:07.685 11:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:07.685 11:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:28:07.685 [2024-11-20 11:40:53.188467] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:28:07.685 [2024-11-20 11:40:53.189739] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 00:28:07.685 [2024-11-20 11:40:53.189805] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:07.943 [2024-11-20 11:40:53.340374] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:07.943 [2024-11-20 11:40:53.382041] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:07.943 [2024-11-20 11:40:53.382120] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:07.943 [2024-11-20 11:40:53.382134] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:07.943 [2024-11-20 11:40:53.382145] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:07.943 [2024-11-20 11:40:53.382154] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:07.943 [2024-11-20 11:40:53.383122] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:07.943 [2024-11-20 11:40:53.383190] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:07.943 [2024-11-20 11:40:53.383277] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:07.943 [2024-11-20 11:40:53.383286] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:07.943 [2024-11-20 11:40:53.440869] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:28:07.943 [2024-11-20 11:40:53.440887] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:28:07.943 [2024-11-20 11:40:53.441588] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
00:28:07.943 [2024-11-20 11:40:53.441777] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:28:07.943 [2024-11-20 11:40:53.441837] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:28:07.943 11:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:07.943 11:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:28:07.943 11:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:07.943 11:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:07.943 11:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:28:07.943 11:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:07.943 11:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:07.943 11:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:07.943 11:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:28:07.943 [2024-11-20 11:40:53.524599] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:08.202 11:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:08.202 11:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:08.202 11:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:08.202 11:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:28:08.202 Malloc0 00:28:08.202 11:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:08.202 11:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:28:08.202 11:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:08.202 11:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:28:08.202 11:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:08.202 11:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:08.202 11:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:08.202 11:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:28:08.202 11:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:08.202 11:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 
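For orientation, the target-side setup that the nmic test drives in the trace above condenses to the equivalent scripts/rpc.py calls below. This is only a sketch, not part of the captured run: it assumes rpc_cmd forwards its arguments unchanged to scripts/rpc.py against the default /var/tmp/spdk.sock socket, and it reuses the exact arguments visible in the trace.

  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192            # TCP transport, 8192-byte in-capsule data size
  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0               # 64 MiB RAM-backed bdev with 512-byte blocks
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420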
00:28:08.202 11:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:08.202 11:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:28:08.202 [2024-11-20 11:40:53.596818] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:28:08.202 11:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:08.202 11:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:28:08.202 test case1: single bdev can't be used in multiple subsystems 00:28:08.202 11:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:28:08.202 11:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:08.202 11:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:28:08.202 11:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:08.202 11:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:28:08.202 11:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:08.202 11:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:28:08.202 11:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:08.202 11:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:28:08.202 11:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:28:08.202 11:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:08.202 11:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:28:08.202 [2024-11-20 11:40:53.624462] bdev.c:8193:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:28:08.202 [2024-11-20 11:40:53.624514] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:28:08.202 [2024-11-20 11:40:53.624529] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:08.202 2024/11/20 11:40:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:Malloc0 no_auto_visible:%!s(bool=false)] nqn:nqn.2016-06.io.spdk:cnode2], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:28:08.202 request: 00:28:08.202 { 00:28:08.202 "method": "nvmf_subsystem_add_ns", 00:28:08.202 "params": { 00:28:08.202 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:28:08.202 "namespace": { 00:28:08.202 "bdev_name": "Malloc0", 00:28:08.202 "no_auto_visible": false 00:28:08.202 } 00:28:08.202 } 00:28:08.202 } 00:28:08.202 Got JSON-RPC error response 00:28:08.202 GoRPCClient: error on JSON-RPC call 00:28:08.202 11:40:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:28:08.202 11:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:28:08.202 11:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:28:08.202 Adding namespace failed - expected result. 00:28:08.202 11:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:28:08.202 test case2: host connect to nvmf target in multiple paths 00:28:08.202 11:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:28:08.202 11:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:28:08.202 11:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:08.202 11:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:28:08.202 [2024-11-20 11:40:53.640636] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:28:08.202 11:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:08.202 11:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a --hostid=806d442c-08ba-4719-b76d-33169283459a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:28:08.202 11:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a --hostid=806d442c-08ba-4719-b76d-33169283459a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4421 00:28:08.460 11:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:28:08.460 11:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:28:08.460 11:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:28:08.460 11:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:28:08.460 11:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:28:10.434 11:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:28:10.434 11:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:28:10.434 11:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:28:10.434 11:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:28:10.434 11:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:28:10.434 11:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:28:10.434 11:40:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:28:10.434 [global] 00:28:10.434 thread=1 00:28:10.434 invalidate=1 00:28:10.434 rw=write 00:28:10.434 time_based=1 00:28:10.434 runtime=1 00:28:10.434 ioengine=libaio 00:28:10.434 direct=1 00:28:10.434 bs=4096 00:28:10.434 iodepth=1 00:28:10.434 norandommap=0 00:28:10.434 numjobs=1 00:28:10.434 00:28:10.434 verify_dump=1 00:28:10.434 verify_backlog=512 00:28:10.434 verify_state_save=0 00:28:10.434 do_verify=1 00:28:10.434 verify=crc32c-intel 00:28:10.434 [job0] 00:28:10.434 filename=/dev/nvme0n1 00:28:10.434 Could not set queue depth (nvme0n1) 00:28:10.434 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:28:10.434 fio-3.35 00:28:10.434 Starting 1 thread 00:28:11.806 00:28:11.806 job0: (groupid=0, jobs=1): err= 0: pid=104956: Wed Nov 20 11:40:57 2024 00:28:11.806 read: IOPS=2829, BW=11.1MiB/s (11.6MB/s)(11.1MiB/1001msec) 00:28:11.806 slat (nsec): min=12310, max=51140, avg=16336.68, stdev=3240.72 00:28:11.806 clat (usec): min=152, max=2189, avg=175.24, stdev=43.09 00:28:11.806 lat (usec): min=167, max=2205, avg=191.58, stdev=43.23 00:28:11.806 clat percentiles (usec): 00:28:11.806 | 1.00th=[ 159], 5.00th=[ 161], 10.00th=[ 163], 20.00th=[ 165], 00:28:11.806 | 30.00th=[ 167], 40.00th=[ 169], 50.00th=[ 172], 60.00th=[ 174], 00:28:11.806 | 70.00th=[ 178], 80.00th=[ 182], 90.00th=[ 186], 95.00th=[ 194], 00:28:11.806 | 99.00th=[ 253], 99.50th=[ 273], 99.90th=[ 486], 99.95th=[ 685], 00:28:11.806 | 99.99th=[ 2180] 00:28:11.806 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:28:11.806 slat (usec): min=15, max=129, avg=23.96, stdev= 6.52 00:28:11.806 clat (usec): min=104, max=716, avg=121.62, stdev=16.80 00:28:11.806 lat (usec): min=126, max=750, avg=145.57, stdev=18.75 00:28:11.806 clat percentiles (usec): 00:28:11.806 | 1.00th=[ 110], 5.00th=[ 112], 10.00th=[ 114], 20.00th=[ 115], 00:28:11.806 | 30.00th=[ 117], 40.00th=[ 118], 50.00th=[ 119], 60.00th=[ 121], 00:28:11.806 | 70.00th=[ 123], 80.00th=[ 128], 90.00th=[ 133], 95.00th=[ 139], 00:28:11.806 | 99.00th=[ 165], 99.50th=[ 186], 99.90th=[ 314], 99.95th=[ 343], 00:28:11.806 | 99.99th=[ 717] 00:28:11.806 bw ( KiB/s): min=12288, max=12288, per=100.00%, avg=12288.00, stdev= 0.00, samples=1 00:28:11.806 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:28:11.806 lat (usec) : 250=99.32%, 500=0.63%, 750=0.03% 00:28:11.806 lat (msec) : 4=0.02% 00:28:11.806 cpu : usr=2.70%, sys=8.60%, ctx=5904, majf=0, minf=5 00:28:11.806 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:11.806 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:11.806 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:11.806 issued rwts: total=2832,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:11.806 latency : target=0, window=0, percentile=100.00%, depth=1 00:28:11.806 00:28:11.806 Run status group 0 (all jobs): 00:28:11.806 READ: bw=11.1MiB/s (11.6MB/s), 11.1MiB/s-11.1MiB/s (11.6MB/s-11.6MB/s), io=11.1MiB (11.6MB), run=1001-1001msec 00:28:11.806 WRITE: bw=12.0MiB/s (12.6MB/s), 12.0MiB/s-12.0MiB/s (12.6MB/s-12.6MB/s), io=12.0MiB (12.6MB), run=1001-1001msec 00:28:11.806 00:28:11.806 Disk stats (read/write): 00:28:11.806 nvme0n1: ios=2610/2724, merge=0/0, ticks=475/365, in_queue=840, util=91.18% 00:28:11.806 11:40:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:28:11.806 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:28:11.806 11:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:28:11.806 11:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:28:11.806 11:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:28:11.806 11:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:28:11.806 11:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:28:11.806 11:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:28:11.806 11:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:28:11.806 11:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:28:11.806 11:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:28:11.806 11:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:11.806 11:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:28:11.806 11:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:11.806 11:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:28:11.806 11:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:11.806 11:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:11.806 rmmod nvme_tcp 00:28:11.806 rmmod nvme_fabrics 00:28:11.806 rmmod nvme_keyring 00:28:12.064 11:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:12.064 11:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:28:12.064 11:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:28:12.064 11:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 104865 ']' 00:28:12.064 11:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 104865 00:28:12.064 11:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 104865 ']' 00:28:12.064 11:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 104865 00:28:12.064 11:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:28:12.064 11:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:12.064 11:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 104865 00:28:12.064 11:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:12.064 11:40:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:12.064 killing process with pid 104865 00:28:12.064 11:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 104865' 00:28:12.064 11:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 104865 00:28:12.064 11:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 104865 00:28:12.064 11:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:12.064 11:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:12.064 11:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:12.064 11:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:28:12.065 11:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:28:12.065 11:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:12.065 11:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:28:12.065 11:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:12.065 11:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:28:12.065 11:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:28:12.065 11:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:28:12.065 11:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:28:12.323 11:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:28:12.323 11:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:28:12.323 11:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:28:12.323 11:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:28:12.323 11:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:28:12.323 11:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:28:12.323 11:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:28:12.323 11:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:28:12.323 11:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:28:12.323 11:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:28:12.323 11:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@246 -- # remove_spdk_ns 00:28:12.323 11:40:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:12.323 11:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:12.323 11:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:12.323 11:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@300 -- # return 0 00:28:12.323 00:28:12.323 real 0m5.375s 00:28:12.323 user 0m14.791s 00:28:12.323 sys 0m2.229s 00:28:12.323 11:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:12.323 ************************************ 00:28:12.323 END TEST nvmf_nmic 00:28:12.323 11:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:28:12.323 ************************************ 00:28:12.323 11:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:28:12.323 11:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:28:12.323 11:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:12.323 11:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:28:12.323 ************************************ 00:28:12.323 START TEST nvmf_fio_target 00:28:12.323 ************************************ 00:28:12.323 11:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:28:12.583 * Looking for test storage... 
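As a companion to that target setup, the host-side sequence the nmic test exercised above reduces to the sketch below. The NQNs, host UUID, and fio-wrapper arguments are copied from the trace; the polling loop is only an approximation of the test's waitforserial helper, which is not reproduced here.

  nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a \
      --hostid=806d442c-08ba-4719-b76d-33169283459a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420
  nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a \
      --hostid=806d442c-08ba-4719-b76d-33169283459a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4421
  # wait until the namespace surfaces, then drive a verified 4 KiB write workload through fio
  until lsblk -l -o NAME,SERIAL | grep -q SPDKISFASTANDAWESOME; do sleep 1; done
  /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1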
00:28:12.583 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:28:12.583 11:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:28:12.583 11:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lcov --version 00:28:12.583 11:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:28:12.583 11:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:28:12.583 11:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:12.583 11:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:12.583 11:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:12.583 11:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:28:12.583 11:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:28:12.583 11:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:28:12.583 11:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:28:12.583 11:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:28:12.583 11:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:28:12.583 11:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:28:12.583 11:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:12.583 11:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:28:12.583 11:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:28:12.583 11:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:12.583 11:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:12.583 11:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:28:12.583 11:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:28:12.583 11:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:12.583 11:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:28:12.583 11:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:28:12.583 11:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:28:12.583 11:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:28:12.583 11:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:12.583 11:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:28:12.583 11:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:28:12.583 11:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:12.583 11:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:12.583 11:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:28:12.583 11:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:12.583 11:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:28:12.583 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:12.583 --rc genhtml_branch_coverage=1 00:28:12.583 --rc genhtml_function_coverage=1 00:28:12.583 --rc genhtml_legend=1 00:28:12.583 --rc geninfo_all_blocks=1 00:28:12.583 --rc geninfo_unexecuted_blocks=1 00:28:12.583 00:28:12.583 ' 00:28:12.583 11:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:28:12.583 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:12.583 --rc genhtml_branch_coverage=1 00:28:12.583 --rc genhtml_function_coverage=1 00:28:12.583 --rc genhtml_legend=1 00:28:12.583 --rc geninfo_all_blocks=1 00:28:12.583 --rc geninfo_unexecuted_blocks=1 00:28:12.583 00:28:12.583 ' 00:28:12.583 11:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:28:12.583 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:12.583 --rc genhtml_branch_coverage=1 00:28:12.583 --rc genhtml_function_coverage=1 00:28:12.583 --rc genhtml_legend=1 00:28:12.583 --rc geninfo_all_blocks=1 00:28:12.583 --rc geninfo_unexecuted_blocks=1 00:28:12.583 00:28:12.583 ' 00:28:12.583 11:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:28:12.583 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:12.583 --rc genhtml_branch_coverage=1 00:28:12.583 --rc genhtml_function_coverage=1 00:28:12.583 --rc genhtml_legend=1 00:28:12.583 --rc geninfo_all_blocks=1 00:28:12.583 --rc geninfo_unexecuted_blocks=1 00:28:12.583 
00:28:12.583 ' 00:28:12.583 11:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:28:12.583 11:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:28:12.583 11:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:12.583 11:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:12.583 11:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:12.583 11:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:12.583 11:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:12.583 11:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:12.583 11:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:12.583 11:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:12.583 11:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:12.583 11:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:12.583 11:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a 00:28:12.583 11:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=806d442c-08ba-4719-b76d-33169283459a 00:28:12.583 11:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:12.583 11:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:12.583 11:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:28:12.583 11:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:12.583 11:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:28:12.583 11:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:28:12.583 11:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:12.583 11:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:12.583 11:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:12.583 11:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:12.583 11:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:12.583 11:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:12.583 11:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:28:12.583 11:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:12.583 11:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:28:12.583 11:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:12.583 11:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:12.584 11:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:12.584 11:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:12.584 11:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:28:12.584 11:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:28:12.584 11:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:28:12.584 11:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:12.584 11:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:12.584 11:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:12.584 11:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:12.584 11:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:12.584 11:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:28:12.584 11:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:28:12.584 11:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:12.584 11:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:12.584 11:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:12.584 11:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:12.584 11:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:12.584 11:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:12.584 11:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:12.584 11:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:12.584 11:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:28:12.584 11:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:28:12.584 11:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:28:12.584 11:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:28:12.584 11:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:28:12.584 11:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@460 -- # nvmf_veth_init 00:28:12.584 11:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:12.584 11:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:28:12.584 11:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:28:12.584 11:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:28:12.584 11:40:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:12.584 11:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:28:12.584 11:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:28:12.584 11:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:28:12.584 11:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:28:12.584 11:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:28:12.584 11:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:28:12.584 11:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:12.584 11:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:28:12.584 11:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:28:12.584 11:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:28:12.584 11:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:28:12.584 11:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:28:12.584 Cannot find device "nvmf_init_br" 00:28:12.584 11:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@162 -- # true 00:28:12.584 11:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:28:12.584 Cannot find device "nvmf_init_br2" 00:28:12.584 11:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@163 -- # true 00:28:12.584 11:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:28:12.843 Cannot find device "nvmf_tgt_br" 00:28:12.843 11:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@164 -- # true 00:28:12.843 11:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:28:12.843 Cannot find device "nvmf_tgt_br2" 00:28:12.843 11:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@165 -- # true 00:28:12.843 11:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:28:12.843 Cannot find device "nvmf_init_br" 00:28:12.843 11:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@166 -- # true 00:28:12.843 11:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:28:12.843 Cannot find device "nvmf_init_br2" 00:28:12.843 11:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@167 -- # true 00:28:12.843 11:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:28:12.843 Cannot find device "nvmf_tgt_br" 00:28:12.843 11:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@168 -- # true 00:28:12.843 11:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:28:12.843 Cannot find device "nvmf_tgt_br2" 00:28:12.843 11:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@169 -- # true 00:28:12.843 11:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:28:12.843 Cannot find device "nvmf_br" 00:28:12.843 11:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@170 -- # true 00:28:12.843 11:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:28:12.843 Cannot find device "nvmf_init_if" 00:28:12.843 11:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@171 -- # true 00:28:12.843 11:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:28:12.843 Cannot find device "nvmf_init_if2" 00:28:12.843 11:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@172 -- # true 00:28:12.843 11:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:28:12.843 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:28:12.843 11:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@173 -- # true 00:28:12.843 11:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:28:12.843 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:28:12.843 11:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@174 -- # true 00:28:12.843 11:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:28:12.843 11:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:28:12.843 11:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:28:12.843 11:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:28:12.843 11:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:28:12.843 11:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:28:12.843 11:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:28:12.843 11:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:28:12.843 11:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:28:12.843 11:40:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:28:12.843 11:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:28:12.843 11:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:28:12.843 11:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:28:12.843 11:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:28:12.843 11:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:28:12.843 11:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:28:12.843 11:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:28:12.843 11:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:28:12.843 11:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:28:12.843 11:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:28:12.843 11:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:28:12.843 11:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:28:12.843 11:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:28:13.101 11:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:28:13.101 11:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:28:13.101 11:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:28:13.101 11:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:28:13.101 11:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:28:13.101 11:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:28:13.101 11:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:28:13.101 11:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:28:13.101 11:40:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:28:13.101 11:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:28:13.101 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:28:13.101 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.130 ms 00:28:13.101 00:28:13.101 --- 10.0.0.3 ping statistics --- 00:28:13.101 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:13.101 rtt min/avg/max/mdev = 0.130/0.130/0.130/0.000 ms 00:28:13.101 11:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:28:13.101 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:28:13.101 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.042 ms 00:28:13.101 00:28:13.101 --- 10.0.0.4 ping statistics --- 00:28:13.101 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:13.101 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:28:13.101 11:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:28:13.101 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:13.101 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.147 ms 00:28:13.101 00:28:13.101 --- 10.0.0.1 ping statistics --- 00:28:13.102 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:13.102 rtt min/avg/max/mdev = 0.147/0.147/0.147/0.000 ms 00:28:13.102 11:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:28:13.102 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:28:13.102 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.076 ms 00:28:13.102 00:28:13.102 --- 10.0.0.2 ping statistics --- 00:28:13.102 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:13.102 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:28:13.102 11:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:13.102 11:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@461 -- # return 0 00:28:13.102 11:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:13.102 11:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:13.102 11:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:13.102 11:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:13.102 11:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:13.102 11:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:13.102 11:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:13.102 11:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:28:13.102 11:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:13.102 11:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:13.102 11:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:28:13.102 11:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=105184 00:28:13.102 11:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 105184 00:28:13.102 11:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:28:13.102 11:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 105184 ']' 00:28:13.102 11:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:13.102 11:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:13.102 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:13.102 11:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:13.102 11:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:13.102 11:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:28:13.102 [2024-11-20 11:40:58.596210] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
00:28:13.102 [2024-11-20 11:40:58.597265] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 00:28:13.102 [2024-11-20 11:40:58.597329] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:13.360 [2024-11-20 11:40:58.741873] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:13.360 [2024-11-20 11:40:58.784218] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:13.360 [2024-11-20 11:40:58.784290] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:13.360 [2024-11-20 11:40:58.784307] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:13.360 [2024-11-20 11:40:58.784316] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:13.360 [2024-11-20 11:40:58.784325] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:13.360 [2024-11-20 11:40:58.785228] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:13.360 [2024-11-20 11:40:58.785318] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:13.360 [2024-11-20 11:40:58.785446] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:13.360 [2024-11-20 11:40:58.785451] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:13.360 [2024-11-20 11:40:58.844231] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:28:13.360 [2024-11-20 11:40:58.844290] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:28:13.360 [2024-11-20 11:40:58.845148] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:28:13.360 [2024-11-20 11:40:58.845422] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:28:13.360 [2024-11-20 11:40:58.845832] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
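The EAL and reactor notices above follow directly from the app arguments: -m 0xF selects cores 0-3 (hence the four "Reactor started on core N" lines), -e 0xFFFF is the tracepoint group mask reported by app_setup_trace, and --interrupt-mode is why each spdk_thread is switched to intr mode. As a quick illustration (not part of the test itself), the cores implied by a hex core mask can be listed with plain shell arithmetic:

    mask=0xF                      # value passed via -m
    for core in $(seq 0 63); do
        if (( (mask >> core) & 1 )); then
            echo "reactor expected on core $core"
        fi
    done
    # prints cores 0, 1, 2 and 3 for mask 0xF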
00:28:13.360 11:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:13.360 11:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:28:13.361 11:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:13.361 11:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:13.361 11:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:28:13.361 11:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:13.361 11:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:28:13.927 [2024-11-20 11:40:59.218328] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:13.927 11:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:28:14.185 11:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:28:14.185 11:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:28:14.443 11:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:28:14.443 11:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:28:14.701 11:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:28:14.701 11:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:28:15.281 11:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:28:15.281 11:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:28:15.552 11:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:28:15.810 11:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:28:15.810 11:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:28:16.068 11:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:28:16.068 11:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:28:16.635 11:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:28:16.635 11:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
target/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:28:16.894 11:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:28:17.152 11:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:28:17.152 11:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:17.411 11:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:28:17.411 11:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:28:17.670 11:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:28:17.929 [2024-11-20 11:41:03.326469] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:28:17.929 11:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:28:18.188 11:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:28:18.446 11:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a --hostid=806d442c-08ba-4719-b76d-33169283459a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:28:18.704 11:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:28:18.704 11:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:28:18.704 11:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:28:18.704 11:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:28:18.704 11:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:28:18.704 11:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:28:20.605 11:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:28:20.605 11:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:28:20.605 11:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:28:20.605 11:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:28:20.605 11:41:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:28:20.605 11:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:28:20.605 11:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:28:20.605 [global] 00:28:20.605 thread=1 00:28:20.605 invalidate=1 00:28:20.605 rw=write 00:28:20.605 time_based=1 00:28:20.605 runtime=1 00:28:20.605 ioengine=libaio 00:28:20.605 direct=1 00:28:20.605 bs=4096 00:28:20.605 iodepth=1 00:28:20.605 norandommap=0 00:28:20.605 numjobs=1 00:28:20.605 00:28:20.605 verify_dump=1 00:28:20.605 verify_backlog=512 00:28:20.605 verify_state_save=0 00:28:20.605 do_verify=1 00:28:20.605 verify=crc32c-intel 00:28:20.605 [job0] 00:28:20.605 filename=/dev/nvme0n1 00:28:20.605 [job1] 00:28:20.605 filename=/dev/nvme0n2 00:28:20.605 [job2] 00:28:20.605 filename=/dev/nvme0n3 00:28:20.605 [job3] 00:28:20.605 filename=/dev/nvme0n4 00:28:20.605 Could not set queue depth (nvme0n1) 00:28:20.605 Could not set queue depth (nvme0n2) 00:28:20.605 Could not set queue depth (nvme0n3) 00:28:20.605 Could not set queue depth (nvme0n4) 00:28:20.863 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:28:20.863 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:28:20.863 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:28:20.863 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:28:20.863 fio-3.35 00:28:20.863 Starting 4 threads 00:28:22.238 00:28:22.238 job0: (groupid=0, jobs=1): err= 0: pid=105470: Wed Nov 20 11:41:07 2024 00:28:22.238 read: IOPS=2201, BW=8807KiB/s (9019kB/s)(8816KiB/1001msec) 00:28:22.238 slat (nsec): min=13036, max=53531, avg=18318.52, stdev=6972.52 00:28:22.238 clat (usec): min=170, max=1787, avg=216.34, stdev=46.75 00:28:22.238 lat (usec): min=184, max=1801, avg=234.66, stdev=51.18 00:28:22.238 clat percentiles (usec): 00:28:22.238 | 1.00th=[ 178], 5.00th=[ 184], 10.00th=[ 186], 20.00th=[ 192], 00:28:22.238 | 30.00th=[ 194], 40.00th=[ 198], 50.00th=[ 202], 60.00th=[ 208], 00:28:22.238 | 70.00th=[ 219], 80.00th=[ 253], 90.00th=[ 269], 95.00th=[ 281], 00:28:22.238 | 99.00th=[ 302], 99.50th=[ 306], 99.90th=[ 343], 99.95th=[ 375], 00:28:22.238 | 99.99th=[ 1795] 00:28:22.238 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:28:22.238 slat (usec): min=18, max=103, avg=26.38, stdev= 9.18 00:28:22.238 clat (usec): min=121, max=962, avg=158.39, stdev=33.39 00:28:22.238 lat (usec): min=141, max=1016, avg=184.76, stdev=40.07 00:28:22.238 clat percentiles (usec): 00:28:22.238 | 1.00th=[ 126], 5.00th=[ 130], 10.00th=[ 133], 20.00th=[ 137], 00:28:22.238 | 30.00th=[ 141], 40.00th=[ 143], 50.00th=[ 147], 60.00th=[ 153], 00:28:22.238 | 70.00th=[ 163], 80.00th=[ 186], 90.00th=[ 202], 95.00th=[ 210], 00:28:22.238 | 99.00th=[ 231], 99.50th=[ 239], 99.90th=[ 437], 99.95th=[ 725], 00:28:22.238 | 99.99th=[ 963] 00:28:22.238 bw ( KiB/s): min= 8904, max= 8904, per=20.89%, avg=8904.00, stdev= 0.00, samples=1 00:28:22.238 iops : min= 2226, max= 2226, avg=2226.00, stdev= 0.00, samples=1 00:28:22.238 lat (usec) : 250=89.95%, 500=9.99%, 750=0.02%, 1000=0.02% 00:28:22.238 lat (msec) : 
2=0.02% 00:28:22.238 cpu : usr=2.60%, sys=7.70%, ctx=4764, majf=0, minf=15 00:28:22.238 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:22.238 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:22.238 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:22.238 issued rwts: total=2204,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:22.238 latency : target=0, window=0, percentile=100.00%, depth=1 00:28:22.238 job1: (groupid=0, jobs=1): err= 0: pid=105471: Wed Nov 20 11:41:07 2024 00:28:22.238 read: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec) 00:28:22.238 slat (nsec): min=12787, max=43531, avg=14666.67, stdev=2358.26 00:28:22.238 clat (usec): min=154, max=631, avg=186.25, stdev=14.73 00:28:22.238 lat (usec): min=170, max=645, avg=200.92, stdev=14.93 00:28:22.238 clat percentiles (usec): 00:28:22.238 | 1.00th=[ 163], 5.00th=[ 169], 10.00th=[ 174], 20.00th=[ 178], 00:28:22.238 | 30.00th=[ 182], 40.00th=[ 184], 50.00th=[ 186], 60.00th=[ 188], 00:28:22.238 | 70.00th=[ 192], 80.00th=[ 194], 90.00th=[ 200], 95.00th=[ 204], 00:28:22.238 | 99.00th=[ 215], 99.50th=[ 221], 99.90th=[ 273], 99.95th=[ 424], 00:28:22.238 | 99.99th=[ 635] 00:28:22.238 write: IOPS=2957, BW=11.6MiB/s (12.1MB/s)(11.6MiB/1001msec); 0 zone resets 00:28:22.238 slat (usec): min=18, max=126, avg=21.40, stdev= 4.13 00:28:22.238 clat (usec): min=112, max=1985, avg=140.03, stdev=36.86 00:28:22.238 lat (usec): min=133, max=2005, avg=161.43, stdev=37.13 00:28:22.238 clat percentiles (usec): 00:28:22.238 | 1.00th=[ 119], 5.00th=[ 124], 10.00th=[ 127], 20.00th=[ 131], 00:28:22.238 | 30.00th=[ 133], 40.00th=[ 137], 50.00th=[ 139], 60.00th=[ 141], 00:28:22.238 | 70.00th=[ 143], 80.00th=[ 147], 90.00th=[ 153], 95.00th=[ 161], 00:28:22.238 | 99.00th=[ 188], 99.50th=[ 200], 99.90th=[ 285], 99.95th=[ 529], 00:28:22.238 | 99.99th=[ 1991] 00:28:22.238 bw ( KiB/s): min=12288, max=12288, per=28.83%, avg=12288.00, stdev= 0.00, samples=1 00:28:22.238 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:28:22.238 lat (usec) : 250=99.87%, 500=0.07%, 750=0.04% 00:28:22.238 lat (msec) : 2=0.02% 00:28:22.238 cpu : usr=1.70%, sys=7.80%, ctx=5520, majf=0, minf=13 00:28:22.238 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:22.238 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:22.238 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:22.238 issued rwts: total=2560,2960,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:22.238 latency : target=0, window=0, percentile=100.00%, depth=1 00:28:22.238 job2: (groupid=0, jobs=1): err= 0: pid=105472: Wed Nov 20 11:41:07 2024 00:28:22.238 read: IOPS=2534, BW=9.90MiB/s (10.4MB/s)(9.91MiB/1001msec) 00:28:22.238 slat (nsec): min=12479, max=52449, avg=15709.50, stdev=3720.56 00:28:22.238 clat (usec): min=174, max=1508, avg=201.14, stdev=30.73 00:28:22.238 lat (usec): min=189, max=1522, avg=216.84, stdev=31.09 00:28:22.238 clat percentiles (usec): 00:28:22.238 | 1.00th=[ 180], 5.00th=[ 184], 10.00th=[ 188], 20.00th=[ 190], 00:28:22.238 | 30.00th=[ 194], 40.00th=[ 196], 50.00th=[ 198], 60.00th=[ 202], 00:28:22.238 | 70.00th=[ 206], 80.00th=[ 210], 90.00th=[ 217], 95.00th=[ 221], 00:28:22.238 | 99.00th=[ 243], 99.50th=[ 277], 99.90th=[ 392], 99.95th=[ 644], 00:28:22.238 | 99.99th=[ 1516] 00:28:22.238 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:28:22.238 slat (usec): min=18, max=117, avg=22.22, stdev= 4.75 
00:28:22.238 clat (usec): min=126, max=305, avg=150.19, stdev=12.78 00:28:22.238 lat (usec): min=149, max=422, avg=172.42, stdev=14.34 00:28:22.238 clat percentiles (usec): 00:28:22.238 | 1.00th=[ 133], 5.00th=[ 135], 10.00th=[ 137], 20.00th=[ 141], 00:28:22.239 | 30.00th=[ 143], 40.00th=[ 147], 50.00th=[ 149], 60.00th=[ 151], 00:28:22.239 | 70.00th=[ 155], 80.00th=[ 159], 90.00th=[ 165], 95.00th=[ 172], 00:28:22.239 | 99.00th=[ 186], 99.50th=[ 194], 99.90th=[ 269], 99.95th=[ 277], 00:28:22.239 | 99.99th=[ 306] 00:28:22.239 bw ( KiB/s): min=12288, max=12288, per=28.83%, avg=12288.00, stdev= 0.00, samples=1 00:28:22.239 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:28:22.239 lat (usec) : 250=99.51%, 500=0.45%, 750=0.02% 00:28:22.239 lat (msec) : 2=0.02% 00:28:22.239 cpu : usr=1.90%, sys=7.10%, ctx=5097, majf=0, minf=5 00:28:22.239 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:22.239 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:22.239 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:22.239 issued rwts: total=2537,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:22.239 latency : target=0, window=0, percentile=100.00%, depth=1 00:28:22.239 job3: (groupid=0, jobs=1): err= 0: pid=105473: Wed Nov 20 11:41:07 2024 00:28:22.239 read: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec) 00:28:22.239 slat (nsec): min=12055, max=39964, avg=15177.42, stdev=3022.14 00:28:22.239 clat (usec): min=166, max=2150, avg=199.66, stdev=46.81 00:28:22.239 lat (usec): min=180, max=2180, avg=214.84, stdev=47.19 00:28:22.239 clat percentiles (usec): 00:28:22.239 | 1.00th=[ 176], 5.00th=[ 182], 10.00th=[ 184], 20.00th=[ 188], 00:28:22.239 | 30.00th=[ 190], 40.00th=[ 194], 50.00th=[ 196], 60.00th=[ 200], 00:28:22.239 | 70.00th=[ 202], 80.00th=[ 206], 90.00th=[ 212], 95.00th=[ 221], 00:28:22.239 | 99.00th=[ 269], 99.50th=[ 297], 99.90th=[ 619], 99.95th=[ 1045], 00:28:22.239 | 99.99th=[ 2147] 00:28:22.239 write: IOPS=2582, BW=10.1MiB/s (10.6MB/s)(10.1MiB/1001msec); 0 zone resets 00:28:22.239 slat (usec): min=18, max=116, avg=22.36, stdev= 5.22 00:28:22.239 clat (usec): min=122, max=346, avg=148.33, stdev=12.96 00:28:22.239 lat (usec): min=141, max=424, avg=170.69, stdev=14.68 00:28:22.239 clat percentiles (usec): 00:28:22.239 | 1.00th=[ 128], 5.00th=[ 133], 10.00th=[ 137], 20.00th=[ 139], 00:28:22.239 | 30.00th=[ 143], 40.00th=[ 145], 50.00th=[ 147], 60.00th=[ 149], 00:28:22.239 | 70.00th=[ 153], 80.00th=[ 157], 90.00th=[ 163], 95.00th=[ 169], 00:28:22.239 | 99.00th=[ 186], 99.50th=[ 204], 99.90th=[ 265], 99.95th=[ 310], 00:28:22.239 | 99.99th=[ 347] 00:28:22.239 bw ( KiB/s): min=12288, max=12288, per=28.83%, avg=12288.00, stdev= 0.00, samples=1 00:28:22.239 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:28:22.239 lat (usec) : 250=99.26%, 500=0.64%, 750=0.06% 00:28:22.239 lat (msec) : 2=0.02%, 4=0.02% 00:28:22.239 cpu : usr=2.00%, sys=7.10%, ctx=5145, majf=0, minf=5 00:28:22.239 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:22.239 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:22.239 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:22.239 issued rwts: total=2560,2585,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:22.239 latency : target=0, window=0, percentile=100.00%, depth=1 00:28:22.239 00:28:22.239 Run status group 0 (all jobs): 00:28:22.239 READ: bw=38.5MiB/s (40.3MB/s), 8807KiB/s-9.99MiB/s 
(9019kB/s-10.5MB/s), io=38.5MiB (40.4MB), run=1001-1001msec 00:28:22.239 WRITE: bw=41.6MiB/s (43.6MB/s), 9.99MiB/s-11.6MiB/s (10.5MB/s-12.1MB/s), io=41.7MiB (43.7MB), run=1001-1001msec 00:28:22.239 00:28:22.239 Disk stats (read/write): 00:28:22.239 nvme0n1: ios=2010/2048, merge=0/0, ticks=489/350, in_queue=839, util=89.38% 00:28:22.239 nvme0n2: ios=2247/2560, merge=0/0, ticks=457/375, in_queue=832, util=90.17% 00:28:22.239 nvme0n3: ios=2048/2372, merge=0/0, ticks=415/373, in_queue=788, util=89.13% 00:28:22.239 nvme0n4: ios=2048/2410, merge=0/0, ticks=417/385, in_queue=802, util=89.69% 00:28:22.239 11:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:28:22.239 [global] 00:28:22.239 thread=1 00:28:22.239 invalidate=1 00:28:22.239 rw=randwrite 00:28:22.239 time_based=1 00:28:22.239 runtime=1 00:28:22.239 ioengine=libaio 00:28:22.239 direct=1 00:28:22.239 bs=4096 00:28:22.239 iodepth=1 00:28:22.239 norandommap=0 00:28:22.239 numjobs=1 00:28:22.239 00:28:22.239 verify_dump=1 00:28:22.239 verify_backlog=512 00:28:22.239 verify_state_save=0 00:28:22.239 do_verify=1 00:28:22.239 verify=crc32c-intel 00:28:22.239 [job0] 00:28:22.239 filename=/dev/nvme0n1 00:28:22.239 [job1] 00:28:22.239 filename=/dev/nvme0n2 00:28:22.239 [job2] 00:28:22.239 filename=/dev/nvme0n3 00:28:22.239 [job3] 00:28:22.239 filename=/dev/nvme0n4 00:28:22.239 Could not set queue depth (nvme0n1) 00:28:22.239 Could not set queue depth (nvme0n2) 00:28:22.239 Could not set queue depth (nvme0n3) 00:28:22.239 Could not set queue depth (nvme0n4) 00:28:22.239 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:28:22.239 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:28:22.239 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:28:22.239 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:28:22.239 fio-3.35 00:28:22.239 Starting 4 threads 00:28:23.616 00:28:23.616 job0: (groupid=0, jobs=1): err= 0: pid=105526: Wed Nov 20 11:41:08 2024 00:28:23.616 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:28:23.616 slat (nsec): min=12069, max=54079, avg=18252.70, stdev=5973.04 00:28:23.616 clat (usec): min=177, max=2104, avg=304.55, stdev=65.89 00:28:23.616 lat (usec): min=204, max=2120, avg=322.81, stdev=66.22 00:28:23.616 clat percentiles (usec): 00:28:23.616 | 1.00th=[ 202], 5.00th=[ 223], 10.00th=[ 273], 20.00th=[ 285], 00:28:23.616 | 30.00th=[ 293], 40.00th=[ 297], 50.00th=[ 302], 60.00th=[ 306], 00:28:23.616 | 70.00th=[ 310], 80.00th=[ 314], 90.00th=[ 334], 95.00th=[ 396], 00:28:23.616 | 99.00th=[ 424], 99.50th=[ 449], 99.90th=[ 1156], 99.95th=[ 2114], 00:28:23.616 | 99.99th=[ 2114] 00:28:23.616 write: IOPS=1966, BW=7864KiB/s (8053kB/s)(7872KiB/1001msec); 0 zone resets 00:28:23.616 slat (nsec): min=18362, max=90339, avg=28293.45, stdev=8498.13 00:28:23.616 clat (usec): min=121, max=1125, avg=223.91, stdev=33.72 00:28:23.616 lat (usec): min=152, max=1159, avg=252.21, stdev=33.83 00:28:23.616 clat percentiles (usec): 00:28:23.616 | 1.00th=[ 176], 5.00th=[ 196], 10.00th=[ 204], 20.00th=[ 210], 00:28:23.616 | 30.00th=[ 215], 40.00th=[ 219], 50.00th=[ 221], 60.00th=[ 225], 00:28:23.616 | 70.00th=[ 229], 80.00th=[ 235], 90.00th=[ 245], 95.00th=[ 258], 00:28:23.616 | 
99.00th=[ 306], 99.50th=[ 347], 99.90th=[ 693], 99.95th=[ 1123], 00:28:23.616 | 99.99th=[ 1123] 00:28:23.616 bw ( KiB/s): min= 8192, max= 8192, per=22.47%, avg=8192.00, stdev= 0.00, samples=1 00:28:23.616 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:28:23.616 lat (usec) : 250=55.02%, 500=44.69%, 750=0.20% 00:28:23.616 lat (msec) : 2=0.06%, 4=0.03% 00:28:23.616 cpu : usr=1.90%, sys=6.10%, ctx=3504, majf=0, minf=11 00:28:23.616 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:23.616 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:23.616 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:23.616 issued rwts: total=1536,1968,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:23.616 latency : target=0, window=0, percentile=100.00%, depth=1 00:28:23.616 job1: (groupid=0, jobs=1): err= 0: pid=105527: Wed Nov 20 11:41:08 2024 00:28:23.616 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:28:23.616 slat (nsec): min=11923, max=63435, avg=18976.27, stdev=6739.67 00:28:23.616 clat (usec): min=170, max=687, avg=302.21, stdev=45.30 00:28:23.616 lat (usec): min=184, max=709, avg=321.19, stdev=45.54 00:28:23.616 clat percentiles (usec): 00:28:23.616 | 1.00th=[ 196], 5.00th=[ 215], 10.00th=[ 269], 20.00th=[ 285], 00:28:23.616 | 30.00th=[ 289], 40.00th=[ 297], 50.00th=[ 302], 60.00th=[ 306], 00:28:23.616 | 70.00th=[ 310], 80.00th=[ 318], 90.00th=[ 343], 95.00th=[ 404], 00:28:23.616 | 99.00th=[ 429], 99.50th=[ 441], 99.90th=[ 529], 99.95th=[ 685], 00:28:23.616 | 99.99th=[ 685] 00:28:23.616 write: IOPS=1985, BW=7940KiB/s (8131kB/s)(7948KiB/1001msec); 0 zone resets 00:28:23.616 slat (usec): min=18, max=108, avg=29.21, stdev=10.49 00:28:23.616 clat (usec): min=115, max=585, avg=221.87, stdev=25.44 00:28:23.616 lat (usec): min=139, max=605, avg=251.08, stdev=23.50 00:28:23.616 clat percentiles (usec): 00:28:23.616 | 1.00th=[ 165], 5.00th=[ 186], 10.00th=[ 196], 20.00th=[ 208], 00:28:23.616 | 30.00th=[ 212], 40.00th=[ 217], 50.00th=[ 221], 60.00th=[ 225], 00:28:23.616 | 70.00th=[ 229], 80.00th=[ 237], 90.00th=[ 247], 95.00th=[ 258], 00:28:23.616 | 99.00th=[ 297], 99.50th=[ 326], 99.90th=[ 461], 99.95th=[ 586], 00:28:23.616 | 99.99th=[ 586] 00:28:23.616 bw ( KiB/s): min= 8192, max= 8192, per=22.47%, avg=8192.00, stdev= 0.00, samples=1 00:28:23.616 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:28:23.616 lat (usec) : 250=55.52%, 500=44.34%, 750=0.14% 00:28:23.616 cpu : usr=2.30%, sys=6.00%, ctx=3531, majf=0, minf=9 00:28:23.616 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:23.616 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:23.616 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:23.617 issued rwts: total=1536,1987,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:23.617 latency : target=0, window=0, percentile=100.00%, depth=1 00:28:23.617 job2: (groupid=0, jobs=1): err= 0: pid=105528: Wed Nov 20 11:41:08 2024 00:28:23.617 read: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec) 00:28:23.617 slat (nsec): min=12832, max=53271, avg=15373.47, stdev=3331.78 00:28:23.617 clat (usec): min=167, max=1967, avg=202.30, stdev=46.91 00:28:23.617 lat (usec): min=181, max=1981, avg=217.67, stdev=47.31 00:28:23.617 clat percentiles (usec): 00:28:23.617 | 1.00th=[ 180], 5.00th=[ 184], 10.00th=[ 188], 20.00th=[ 190], 00:28:23.617 | 30.00th=[ 192], 40.00th=[ 196], 50.00th=[ 198], 60.00th=[ 200], 00:28:23.617 | 
70.00th=[ 204], 80.00th=[ 208], 90.00th=[ 215], 95.00th=[ 225], 00:28:23.617 | 99.00th=[ 289], 99.50th=[ 314], 99.90th=[ 947], 99.95th=[ 1123], 00:28:23.617 | 99.99th=[ 1975] 00:28:23.617 write: IOPS=2574, BW=10.1MiB/s (10.5MB/s)(10.1MiB/1001msec); 0 zone resets 00:28:23.617 slat (nsec): min=18950, max=97419, avg=22663.29, stdev=4705.77 00:28:23.617 clat (usec): min=127, max=573, avg=145.92, stdev=14.13 00:28:23.617 lat (usec): min=148, max=593, avg=168.59, stdev=15.77 00:28:23.617 clat percentiles (usec): 00:28:23.617 | 1.00th=[ 131], 5.00th=[ 135], 10.00th=[ 135], 20.00th=[ 139], 00:28:23.617 | 30.00th=[ 141], 40.00th=[ 143], 50.00th=[ 145], 60.00th=[ 147], 00:28:23.617 | 70.00th=[ 149], 80.00th=[ 153], 90.00th=[ 159], 95.00th=[ 165], 00:28:23.617 | 99.00th=[ 186], 99.50th=[ 210], 99.90th=[ 241], 99.95th=[ 310], 00:28:23.617 | 99.99th=[ 570] 00:28:23.617 bw ( KiB/s): min=12288, max=12288, per=33.70%, avg=12288.00, stdev= 0.00, samples=1 00:28:23.617 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:28:23.617 lat (usec) : 250=98.27%, 500=1.64%, 750=0.04%, 1000=0.02% 00:28:23.617 lat (msec) : 2=0.04% 00:28:23.617 cpu : usr=2.00%, sys=7.20%, ctx=5137, majf=0, minf=7 00:28:23.617 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:23.617 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:23.617 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:23.617 issued rwts: total=2560,2577,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:23.617 latency : target=0, window=0, percentile=100.00%, depth=1 00:28:23.617 job3: (groupid=0, jobs=1): err= 0: pid=105529: Wed Nov 20 11:41:08 2024 00:28:23.617 read: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec) 00:28:23.617 slat (nsec): min=12890, max=45925, avg=16073.35, stdev=3847.45 00:28:23.617 clat (usec): min=174, max=1815, avg=201.01, stdev=46.89 00:28:23.617 lat (usec): min=187, max=1831, avg=217.08, stdev=47.31 00:28:23.617 clat percentiles (usec): 00:28:23.617 | 1.00th=[ 182], 5.00th=[ 186], 10.00th=[ 188], 20.00th=[ 190], 00:28:23.617 | 30.00th=[ 192], 40.00th=[ 196], 50.00th=[ 198], 60.00th=[ 200], 00:28:23.617 | 70.00th=[ 204], 80.00th=[ 206], 90.00th=[ 215], 95.00th=[ 221], 00:28:23.617 | 99.00th=[ 273], 99.50th=[ 302], 99.90th=[ 506], 99.95th=[ 1729], 00:28:23.617 | 99.99th=[ 1811] 00:28:23.617 write: IOPS=2590, BW=10.1MiB/s (10.6MB/s)(10.1MiB/1001msec); 0 zone resets 00:28:23.617 slat (nsec): min=18935, max=75941, avg=22533.31, stdev=3906.12 00:28:23.617 clat (usec): min=128, max=304, avg=145.35, stdev= 9.42 00:28:23.617 lat (usec): min=147, max=380, avg=167.88, stdev=10.78 00:28:23.617 clat percentiles (usec): 00:28:23.617 | 1.00th=[ 133], 5.00th=[ 135], 10.00th=[ 137], 20.00th=[ 139], 00:28:23.617 | 30.00th=[ 141], 40.00th=[ 143], 50.00th=[ 145], 60.00th=[ 147], 00:28:23.617 | 70.00th=[ 149], 80.00th=[ 153], 90.00th=[ 157], 95.00th=[ 163], 00:28:23.617 | 99.00th=[ 172], 99.50th=[ 178], 99.90th=[ 192], 99.95th=[ 198], 00:28:23.617 | 99.99th=[ 306] 00:28:23.617 bw ( KiB/s): min=12288, max=12288, per=33.70%, avg=12288.00, stdev= 0.00, samples=1 00:28:23.617 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:28:23.617 lat (usec) : 250=99.24%, 500=0.70%, 750=0.02% 00:28:23.617 lat (msec) : 2=0.04% 00:28:23.617 cpu : usr=2.40%, sys=7.00%, ctx=5153, majf=0, minf=18 00:28:23.617 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:23.617 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:28:23.617 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:23.617 issued rwts: total=2560,2593,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:23.617 latency : target=0, window=0, percentile=100.00%, depth=1 00:28:23.617 00:28:23.617 Run status group 0 (all jobs): 00:28:23.617 READ: bw=32.0MiB/s (33.5MB/s), 6138KiB/s-9.99MiB/s (6285kB/s-10.5MB/s), io=32.0MiB (33.6MB), run=1001-1001msec 00:28:23.617 WRITE: bw=35.6MiB/s (37.3MB/s), 7864KiB/s-10.1MiB/s (8053kB/s-10.6MB/s), io=35.6MiB (37.4MB), run=1001-1001msec 00:28:23.617 00:28:23.617 Disk stats (read/write): 00:28:23.617 nvme0n1: ios=1455/1536, merge=0/0, ticks=467/361, in_queue=828, util=87.44% 00:28:23.617 nvme0n2: ios=1443/1536, merge=0/0, ticks=470/357, in_queue=827, util=87.41% 00:28:23.617 nvme0n3: ios=2048/2347, merge=0/0, ticks=424/364, in_queue=788, util=89.11% 00:28:23.617 nvme0n4: ios=2048/2357, merge=0/0, ticks=425/353, in_queue=778, util=89.67% 00:28:23.617 11:41:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:28:23.617 [global] 00:28:23.617 thread=1 00:28:23.617 invalidate=1 00:28:23.617 rw=write 00:28:23.617 time_based=1 00:28:23.617 runtime=1 00:28:23.617 ioengine=libaio 00:28:23.617 direct=1 00:28:23.617 bs=4096 00:28:23.617 iodepth=128 00:28:23.617 norandommap=0 00:28:23.617 numjobs=1 00:28:23.617 00:28:23.617 verify_dump=1 00:28:23.617 verify_backlog=512 00:28:23.617 verify_state_save=0 00:28:23.617 do_verify=1 00:28:23.617 verify=crc32c-intel 00:28:23.617 [job0] 00:28:23.617 filename=/dev/nvme0n1 00:28:23.617 [job1] 00:28:23.617 filename=/dev/nvme0n2 00:28:23.617 [job2] 00:28:23.617 filename=/dev/nvme0n3 00:28:23.617 [job3] 00:28:23.617 filename=/dev/nvme0n4 00:28:23.617 Could not set queue depth (nvme0n1) 00:28:23.617 Could not set queue depth (nvme0n2) 00:28:23.617 Could not set queue depth (nvme0n3) 00:28:23.617 Could not set queue depth (nvme0n4) 00:28:23.617 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:28:23.617 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:28:23.617 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:28:23.617 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:28:23.617 fio-3.35 00:28:23.617 Starting 4 threads 00:28:25.000 00:28:25.000 job0: (groupid=0, jobs=1): err= 0: pid=105590: Wed Nov 20 11:41:10 2024 00:28:25.000 read: IOPS=2544, BW=9.94MiB/s (10.4MB/s)(10.0MiB/1006msec) 00:28:25.000 slat (usec): min=5, max=10201, avg=183.68, stdev=759.72 00:28:25.000 clat (usec): min=15607, max=45903, avg=22919.32, stdev=4275.61 00:28:25.000 lat (usec): min=16761, max=45917, avg=23102.99, stdev=4286.43 00:28:25.000 clat percentiles (usec): 00:28:25.000 | 1.00th=[17433], 5.00th=[18744], 10.00th=[19530], 20.00th=[20055], 00:28:25.000 | 30.00th=[20841], 40.00th=[21365], 50.00th=[21890], 60.00th=[22938], 00:28:25.000 | 70.00th=[23462], 80.00th=[24249], 90.00th=[26084], 95.00th=[31327], 00:28:25.000 | 99.00th=[41681], 99.50th=[43254], 99.90th=[45876], 99.95th=[45876], 00:28:25.000 | 99.99th=[45876] 00:28:25.000 write: IOPS=3032, BW=11.8MiB/s (12.4MB/s)(11.9MiB/1006msec); 0 zone resets 00:28:25.000 slat (usec): min=9, max=6223, avg=165.93, stdev=594.27 00:28:25.000 clat (usec): min=5259, max=36070, avg=22360.77, 
stdev=2775.36 00:28:25.000 lat (usec): min=5896, max=36093, avg=22526.70, stdev=2726.50 00:28:25.000 clat percentiles (usec): 00:28:25.000 | 1.00th=[10814], 5.00th=[17433], 10.00th=[20055], 20.00th=[21103], 00:28:25.000 | 30.00th=[21890], 40.00th=[22152], 50.00th=[22414], 60.00th=[22676], 00:28:25.000 | 70.00th=[22938], 80.00th=[23725], 90.00th=[25297], 95.00th=[26608], 00:28:25.000 | 99.00th=[29754], 99.50th=[31327], 99.90th=[31327], 99.95th=[35914], 00:28:25.000 | 99.99th=[35914] 00:28:25.000 bw ( KiB/s): min=11104, max=12288, per=18.55%, avg=11696.00, stdev=837.21, samples=2 00:28:25.000 iops : min= 2776, max= 3072, avg=2924.00, stdev=209.30, samples=2 00:28:25.000 lat (msec) : 10=0.39%, 20=12.39%, 50=87.22% 00:28:25.000 cpu : usr=2.89%, sys=8.56%, ctx=913, majf=0, minf=15 00:28:25.000 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:28:25.000 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:25.000 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:28:25.000 issued rwts: total=2560,3051,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:25.000 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:25.000 job1: (groupid=0, jobs=1): err= 0: pid=105591: Wed Nov 20 11:41:10 2024 00:28:25.000 read: IOPS=2544, BW=9.94MiB/s (10.4MB/s)(10.0MiB/1006msec) 00:28:25.000 slat (usec): min=2, max=7706, avg=182.10, stdev=732.74 00:28:25.000 clat (usec): min=17824, max=37672, avg=22894.02, stdev=2777.88 00:28:25.000 lat (usec): min=17919, max=37688, avg=23076.12, stdev=2784.27 00:28:25.000 clat percentiles (usec): 00:28:25.000 | 1.00th=[18482], 5.00th=[19530], 10.00th=[20055], 20.00th=[20841], 00:28:25.000 | 30.00th=[21365], 40.00th=[22152], 50.00th=[22676], 60.00th=[23200], 00:28:25.000 | 70.00th=[23725], 80.00th=[24249], 90.00th=[25560], 95.00th=[26346], 00:28:25.000 | 99.00th=[35390], 99.50th=[36963], 99.90th=[37487], 99.95th=[37487], 00:28:25.000 | 99.99th=[37487] 00:28:25.000 write: IOPS=2957, BW=11.6MiB/s (12.1MB/s)(11.6MiB/1006msec); 0 zone resets 00:28:25.000 slat (usec): min=12, max=6182, avg=171.70, stdev=636.92 00:28:25.000 clat (usec): min=5303, max=37024, avg=22789.48, stdev=2952.78 00:28:25.000 lat (usec): min=5327, max=37040, avg=22961.19, stdev=2911.55 00:28:25.000 clat percentiles (usec): 00:28:25.000 | 1.00th=[11076], 5.00th=[19268], 10.00th=[20841], 20.00th=[21627], 00:28:25.000 | 30.00th=[22152], 40.00th=[22414], 50.00th=[22676], 60.00th=[22938], 00:28:25.000 | 70.00th=[23462], 80.00th=[24249], 90.00th=[25297], 95.00th=[26870], 00:28:25.000 | 99.00th=[32113], 99.50th=[32900], 99.90th=[35914], 99.95th=[36963], 00:28:25.000 | 99.99th=[36963] 00:28:25.000 bw ( KiB/s): min=10496, max=12288, per=18.07%, avg=11392.00, stdev=1267.14, samples=2 00:28:25.000 iops : min= 2624, max= 3072, avg=2848.00, stdev=316.78, samples=2 00:28:25.000 lat (msec) : 10=0.52%, 20=7.08%, 50=92.39% 00:28:25.000 cpu : usr=2.59%, sys=8.66%, ctx=883, majf=0, minf=19 00:28:25.000 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:28:25.000 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:25.000 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:28:25.000 issued rwts: total=2560,2975,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:25.000 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:25.000 job2: (groupid=0, jobs=1): err= 0: pid=105592: Wed Nov 20 11:41:10 2024 00:28:25.000 read: IOPS=4580, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1006msec) 00:28:25.000 
slat (usec): min=5, max=6904, avg=101.02, stdev=525.37 00:28:25.000 clat (usec): min=7376, max=22670, avg=13297.22, stdev=2189.23 00:28:25.000 lat (usec): min=7397, max=22706, avg=13398.24, stdev=2208.21 00:28:25.000 clat percentiles (usec): 00:28:25.000 | 1.00th=[ 8848], 5.00th=[10421], 10.00th=[10814], 20.00th=[11469], 00:28:25.000 | 30.00th=[12125], 40.00th=[12518], 50.00th=[13042], 60.00th=[13566], 00:28:25.000 | 70.00th=[14091], 80.00th=[14877], 90.00th=[16057], 95.00th=[17171], 00:28:25.000 | 99.00th=[20317], 99.50th=[21365], 99.90th=[22152], 99.95th=[22152], 00:28:25.000 | 99.99th=[22676] 00:28:25.000 write: IOPS=4913, BW=19.2MiB/s (20.1MB/s)(19.3MiB/1006msec); 0 zone resets 00:28:25.000 slat (usec): min=10, max=6832, avg=100.13, stdev=482.99 00:28:25.000 clat (usec): min=5195, max=22908, avg=13323.53, stdev=2101.07 00:28:25.000 lat (usec): min=5827, max=22927, avg=13423.66, stdev=2144.12 00:28:25.000 clat percentiles (usec): 00:28:25.000 | 1.00th=[ 8586], 5.00th=[10683], 10.00th=[11469], 20.00th=[11994], 00:28:25.000 | 30.00th=[12256], 40.00th=[12518], 50.00th=[12780], 60.00th=[13173], 00:28:25.000 | 70.00th=[13698], 80.00th=[14877], 90.00th=[16909], 95.00th=[17171], 00:28:25.000 | 99.00th=[20055], 99.50th=[21365], 99.90th=[22938], 99.95th=[22938], 00:28:25.000 | 99.99th=[22938] 00:28:25.000 bw ( KiB/s): min=19232, max=19296, per=30.55%, avg=19264.00, stdev=45.25, samples=2 00:28:25.000 iops : min= 4808, max= 4824, avg=4816.00, stdev=11.31, samples=2 00:28:25.000 lat (msec) : 10=3.42%, 20=95.53%, 50=1.05% 00:28:25.000 cpu : usr=4.58%, sys=13.83%, ctx=517, majf=0, minf=7 00:28:25.000 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:28:25.000 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:25.000 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:28:25.000 issued rwts: total=4608,4943,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:25.000 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:25.000 job3: (groupid=0, jobs=1): err= 0: pid=105593: Wed Nov 20 11:41:10 2024 00:28:25.000 read: IOPS=4589, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1004msec) 00:28:25.000 slat (usec): min=6, max=3756, avg=104.30, stdev=423.44 00:28:25.000 clat (usec): min=9883, max=17750, avg=13467.82, stdev=979.54 00:28:25.000 lat (usec): min=10587, max=18013, avg=13572.11, stdev=931.67 00:28:25.000 clat percentiles (usec): 00:28:25.000 | 1.00th=[10683], 5.00th=[11863], 10.00th=[12518], 20.00th=[12780], 00:28:25.000 | 30.00th=[13042], 40.00th=[13173], 50.00th=[13435], 60.00th=[13698], 00:28:25.000 | 70.00th=[14091], 80.00th=[14353], 90.00th=[14484], 95.00th=[14746], 00:28:25.000 | 99.00th=[15926], 99.50th=[16188], 99.90th=[17695], 99.95th=[17695], 00:28:25.000 | 99.99th=[17695] 00:28:25.000 write: IOPS=4868, BW=19.0MiB/s (19.9MB/s)(19.1MiB/1004msec); 0 zone resets 00:28:25.000 slat (usec): min=10, max=3011, avg=98.50, stdev=341.58 00:28:25.000 clat (usec): min=2603, max=18328, avg=13244.37, stdev=1681.94 00:28:25.000 lat (usec): min=3258, max=18413, avg=13342.87, stdev=1679.20 00:28:25.000 clat percentiles (usec): 00:28:25.000 | 1.00th=[ 7308], 5.00th=[10945], 10.00th=[11207], 20.00th=[11600], 00:28:25.000 | 30.00th=[12649], 40.00th=[13173], 50.00th=[13566], 60.00th=[13829], 00:28:25.001 | 70.00th=[14222], 80.00th=[14484], 90.00th=[14746], 95.00th=[15401], 00:28:25.001 | 99.00th=[17171], 99.50th=[17695], 99.90th=[17957], 99.95th=[17957], 00:28:25.001 | 99.99th=[18220] 00:28:25.001 bw ( KiB/s): min=17608, max=20439, per=30.17%, 
avg=19023.50, stdev=2001.82, samples=2 00:28:25.001 iops : min= 4402, max= 5109, avg=4755.50, stdev=499.92, samples=2 00:28:25.001 lat (msec) : 4=0.14%, 10=0.64%, 20=99.22% 00:28:25.001 cpu : usr=4.59%, sys=13.26%, ctx=669, majf=0, minf=12 00:28:25.001 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:28:25.001 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:25.001 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:28:25.001 issued rwts: total=4608,4888,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:25.001 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:25.001 00:28:25.001 Run status group 0 (all jobs): 00:28:25.001 READ: bw=55.7MiB/s (58.4MB/s), 9.94MiB/s-17.9MiB/s (10.4MB/s-18.8MB/s), io=56.0MiB (58.7MB), run=1004-1006msec 00:28:25.001 WRITE: bw=61.6MiB/s (64.6MB/s), 11.6MiB/s-19.2MiB/s (12.1MB/s-20.1MB/s), io=61.9MiB (64.9MB), run=1004-1006msec 00:28:25.001 00:28:25.001 Disk stats (read/write): 00:28:25.001 nvme0n1: ios=2276/2560, merge=0/0, ticks=12441/13134, in_queue=25575, util=89.17% 00:28:25.001 nvme0n2: ios=2210/2560, merge=0/0, ticks=11894/13267, in_queue=25161, util=89.79% 00:28:25.001 nvme0n3: ios=3990/4096, merge=0/0, ticks=25417/24093, in_queue=49510, util=88.66% 00:28:25.001 nvme0n4: ios=3990/4096, merge=0/0, ticks=12708/12476, in_queue=25184, util=89.39% 00:28:25.001 11:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:28:25.001 [global] 00:28:25.001 thread=1 00:28:25.001 invalidate=1 00:28:25.001 rw=randwrite 00:28:25.001 time_based=1 00:28:25.001 runtime=1 00:28:25.001 ioengine=libaio 00:28:25.001 direct=1 00:28:25.001 bs=4096 00:28:25.001 iodepth=128 00:28:25.001 norandommap=0 00:28:25.001 numjobs=1 00:28:25.001 00:28:25.001 verify_dump=1 00:28:25.001 verify_backlog=512 00:28:25.001 verify_state_save=0 00:28:25.001 do_verify=1 00:28:25.001 verify=crc32c-intel 00:28:25.001 [job0] 00:28:25.001 filename=/dev/nvme0n1 00:28:25.001 [job1] 00:28:25.001 filename=/dev/nvme0n2 00:28:25.001 [job2] 00:28:25.001 filename=/dev/nvme0n3 00:28:25.001 [job3] 00:28:25.001 filename=/dev/nvme0n4 00:28:25.001 Could not set queue depth (nvme0n1) 00:28:25.001 Could not set queue depth (nvme0n2) 00:28:25.001 Could not set queue depth (nvme0n3) 00:28:25.001 Could not set queue depth (nvme0n4) 00:28:25.001 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:28:25.001 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:28:25.001 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:28:25.001 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:28:25.001 fio-3.35 00:28:25.001 Starting 4 threads 00:28:26.377 00:28:26.377 job0: (groupid=0, jobs=1): err= 0: pid=105646: Wed Nov 20 11:41:11 2024 00:28:26.377 read: IOPS=2587, BW=10.1MiB/s (10.6MB/s)(10.2MiB/1006msec) 00:28:26.377 slat (usec): min=4, max=7168, avg=178.44, stdev=717.10 00:28:26.377 clat (usec): min=3918, max=31488, avg=22312.43, stdev=2896.48 00:28:26.377 lat (usec): min=9566, max=31502, avg=22490.87, stdev=2913.13 00:28:26.377 clat percentiles (usec): 00:28:26.377 | 1.00th=[15795], 5.00th=[18482], 10.00th=[19006], 20.00th=[20055], 00:28:26.377 | 30.00th=[20579], 40.00th=[21365], 
50.00th=[21890], 60.00th=[22676], 00:28:26.377 | 70.00th=[23725], 80.00th=[24773], 90.00th=[26346], 95.00th=[27395], 00:28:26.377 | 99.00th=[28967], 99.50th=[29230], 99.90th=[31589], 99.95th=[31589], 00:28:26.377 | 99.99th=[31589] 00:28:26.377 write: IOPS=3053, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1006msec); 0 zone resets 00:28:26.377 slat (usec): min=4, max=6915, avg=166.92, stdev=547.04 00:28:26.377 clat (usec): min=14658, max=34011, avg=22446.23, stdev=2638.62 00:28:26.377 lat (usec): min=14894, max=34038, avg=22613.14, stdev=2674.06 00:28:26.377 clat percentiles (usec): 00:28:26.377 | 1.00th=[15533], 5.00th=[17171], 10.00th=[18744], 20.00th=[20841], 00:28:26.377 | 30.00th=[21890], 40.00th=[22414], 50.00th=[22676], 60.00th=[23200], 00:28:26.377 | 70.00th=[23462], 80.00th=[24249], 90.00th=[25035], 95.00th=[25822], 00:28:26.377 | 99.00th=[30802], 99.50th=[32113], 99.90th=[33817], 99.95th=[33817], 00:28:26.377 | 99.99th=[33817] 00:28:26.377 bw ( KiB/s): min=11616, max=12288, per=19.15%, avg=11952.00, stdev=475.18, samples=2 00:28:26.377 iops : min= 2904, max= 3072, avg=2988.00, stdev=118.79, samples=2 00:28:26.377 lat (msec) : 4=0.02%, 10=0.02%, 20=17.97%, 50=81.99% 00:28:26.377 cpu : usr=2.79%, sys=8.86%, ctx=988, majf=0, minf=13 00:28:26.377 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:28:26.377 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:26.377 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:28:26.377 issued rwts: total=2603,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:26.377 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:26.377 job1: (groupid=0, jobs=1): err= 0: pid=105647: Wed Nov 20 11:41:11 2024 00:28:26.377 read: IOPS=2552, BW=9.97MiB/s (10.5MB/s)(10.0MiB/1003msec) 00:28:26.377 slat (usec): min=3, max=7519, avg=184.25, stdev=776.96 00:28:26.377 clat (usec): min=16508, max=31350, avg=22728.82, stdev=2476.53 00:28:26.377 lat (usec): min=16523, max=31367, avg=22913.08, stdev=2505.88 00:28:26.377 clat percentiles (usec): 00:28:26.377 | 1.00th=[17695], 5.00th=[19006], 10.00th=[19530], 20.00th=[20317], 00:28:26.377 | 30.00th=[21103], 40.00th=[21890], 50.00th=[22414], 60.00th=[23462], 00:28:26.377 | 70.00th=[24511], 80.00th=[25035], 90.00th=[25822], 95.00th=[26870], 00:28:26.377 | 99.00th=[28443], 99.50th=[30016], 99.90th=[31327], 99.95th=[31327], 00:28:26.377 | 99.99th=[31327] 00:28:26.377 write: IOPS=3022, BW=11.8MiB/s (12.4MB/s)(11.8MiB/1003msec); 0 zone resets 00:28:26.377 slat (usec): min=5, max=6927, avg=166.38, stdev=489.08 00:28:26.377 clat (usec): min=2715, max=28834, avg=22508.58, stdev=2865.81 00:28:26.377 lat (usec): min=2740, max=29448, avg=22674.96, stdev=2895.13 00:28:26.377 clat percentiles (usec): 00:28:26.377 | 1.00th=[ 7046], 5.00th=[18482], 10.00th=[20317], 20.00th=[21890], 00:28:26.377 | 30.00th=[22152], 40.00th=[22676], 50.00th=[22676], 60.00th=[22938], 00:28:26.377 | 70.00th=[23462], 80.00th=[23987], 90.00th=[24773], 95.00th=[25822], 00:28:26.377 | 99.00th=[27132], 99.50th=[27919], 99.90th=[28705], 99.95th=[28705], 00:28:26.377 | 99.99th=[28705] 00:28:26.377 bw ( KiB/s): min=10952, max=12263, per=18.59%, avg=11607.50, stdev=927.02, samples=2 00:28:26.377 iops : min= 2738, max= 3065, avg=2901.50, stdev=231.22, samples=2 00:28:26.377 lat (msec) : 4=0.41%, 10=0.27%, 20=11.14%, 50=88.18% 00:28:26.377 cpu : usr=2.79%, sys=8.98%, ctx=956, majf=0, minf=7 00:28:26.377 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:28:26.377 submit : 
0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:26.377 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:28:26.377 issued rwts: total=2560,3032,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:26.377 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:26.377 job2: (groupid=0, jobs=1): err= 0: pid=105648: Wed Nov 20 11:41:11 2024 00:28:26.377 read: IOPS=4083, BW=16.0MiB/s (16.7MB/s)(16.0MiB/1003msec) 00:28:26.377 slat (usec): min=6, max=4533, avg=116.82, stdev=545.39 00:28:26.377 clat (usec): min=9953, max=21607, avg=14899.33, stdev=1825.47 00:28:26.377 lat (usec): min=9983, max=21618, avg=15016.15, stdev=1849.74 00:28:26.377 clat percentiles (usec): 00:28:26.377 | 1.00th=[11207], 5.00th=[11863], 10.00th=[12518], 20.00th=[13304], 00:28:26.377 | 30.00th=[13960], 40.00th=[14484], 50.00th=[15008], 60.00th=[15270], 00:28:26.377 | 70.00th=[15664], 80.00th=[16057], 90.00th=[17433], 95.00th=[17957], 00:28:26.377 | 99.00th=[19792], 99.50th=[20055], 99.90th=[21365], 99.95th=[21627], 00:28:26.377 | 99.99th=[21627] 00:28:26.377 write: IOPS=4493, BW=17.6MiB/s (18.4MB/s)(17.6MiB/1003msec); 0 zone resets 00:28:26.377 slat (usec): min=11, max=4872, avg=108.31, stdev=414.22 00:28:26.377 clat (usec): min=501, max=21957, avg=14541.76, stdev=1922.65 00:28:26.377 lat (usec): min=4564, max=22021, avg=14650.07, stdev=1919.98 00:28:26.377 clat percentiles (usec): 00:28:26.377 | 1.00th=[ 9241], 5.00th=[10945], 10.00th=[12518], 20.00th=[13435], 00:28:26.377 | 30.00th=[14091], 40.00th=[14353], 50.00th=[14877], 60.00th=[15139], 00:28:26.378 | 70.00th=[15270], 80.00th=[15795], 90.00th=[16581], 95.00th=[17171], 00:28:26.378 | 99.00th=[18744], 99.50th=[19268], 99.90th=[20317], 99.95th=[21103], 00:28:26.378 | 99.99th=[21890] 00:28:26.378 bw ( KiB/s): min=17288, max=17779, per=28.09%, avg=17533.50, stdev=347.19, samples=2 00:28:26.378 iops : min= 4322, max= 4444, avg=4383.00, stdev=86.27, samples=2 00:28:26.378 lat (usec) : 750=0.01% 00:28:26.378 lat (msec) : 10=1.53%, 20=97.97%, 50=0.49% 00:28:26.378 cpu : usr=4.19%, sys=12.28%, ctx=570, majf=0, minf=19 00:28:26.378 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:28:26.378 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:26.378 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:28:26.378 issued rwts: total=4096,4507,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:26.378 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:26.378 job3: (groupid=0, jobs=1): err= 0: pid=105649: Wed Nov 20 11:41:11 2024 00:28:26.378 read: IOPS=4902, BW=19.2MiB/s (20.1MB/s)(19.3MiB/1008msec) 00:28:26.378 slat (usec): min=4, max=11466, avg=100.07, stdev=661.95 00:28:26.378 clat (usec): min=1664, max=25846, avg=13164.59, stdev=3402.49 00:28:26.378 lat (usec): min=4590, max=25864, avg=13264.66, stdev=3424.98 00:28:26.378 clat percentiles (usec): 00:28:26.378 | 1.00th=[ 7111], 5.00th=[ 8291], 10.00th=[ 9634], 20.00th=[10290], 00:28:26.378 | 30.00th=[11469], 40.00th=[12256], 50.00th=[12780], 60.00th=[13042], 00:28:26.378 | 70.00th=[13960], 80.00th=[15270], 90.00th=[17695], 95.00th=[20055], 00:28:26.378 | 99.00th=[23725], 99.50th=[25035], 99.90th=[25822], 99.95th=[25822], 00:28:26.378 | 99.99th=[25822] 00:28:26.378 write: IOPS=5079, BW=19.8MiB/s (20.8MB/s)(20.0MiB/1008msec); 0 zone resets 00:28:26.378 slat (usec): min=3, max=10247, avg=92.32, stdev=616.45 00:28:26.378 clat (usec): min=4072, max=25759, avg=12220.37, stdev=2301.02 00:28:26.378 lat (usec): 
min=4095, max=25767, avg=12312.70, stdev=2366.19 00:28:26.378 clat percentiles (usec): 00:28:26.378 | 1.00th=[ 5604], 5.00th=[ 7570], 10.00th=[ 8979], 20.00th=[11076], 00:28:26.378 | 30.00th=[11863], 40.00th=[12125], 50.00th=[12518], 60.00th=[13042], 00:28:26.378 | 70.00th=[13173], 80.00th=[13304], 90.00th=[13698], 95.00th=[16450], 00:28:26.378 | 99.00th=[18220], 99.50th=[19006], 99.90th=[23462], 99.95th=[23987], 00:28:26.378 | 99.99th=[25822] 00:28:26.378 bw ( KiB/s): min=20480, max=20521, per=32.84%, avg=20500.50, stdev=28.99, samples=2 00:28:26.378 iops : min= 5120, max= 5130, avg=5125.00, stdev= 7.07, samples=2 00:28:26.378 lat (msec) : 2=0.01%, 10=14.48%, 20=82.77%, 50=2.74% 00:28:26.378 cpu : usr=4.07%, sys=12.81%, ctx=507, majf=0, minf=12 00:28:26.378 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:28:26.378 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:26.378 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:28:26.378 issued rwts: total=4942,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:26.378 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:26.378 00:28:26.378 Run status group 0 (all jobs): 00:28:26.378 READ: bw=55.0MiB/s (57.7MB/s), 9.97MiB/s-19.2MiB/s (10.5MB/s-20.1MB/s), io=55.5MiB (58.2MB), run=1003-1008msec 00:28:26.378 WRITE: bw=61.0MiB/s (63.9MB/s), 11.8MiB/s-19.8MiB/s (12.4MB/s-20.8MB/s), io=61.4MiB (64.4MB), run=1003-1008msec 00:28:26.378 00:28:26.378 Disk stats (read/write): 00:28:26.378 nvme0n1: ios=2413/2560, merge=0/0, ticks=16998/17263, in_queue=34261, util=90.67% 00:28:26.378 nvme0n2: ios=2331/2560, merge=0/0, ticks=16522/17710, in_queue=34232, util=89.47% 00:28:26.378 nvme0n3: ios=3584/3895, merge=0/0, ticks=16719/17143, in_queue=33862, util=89.19% 00:28:26.378 nvme0n4: ios=4096/4457, merge=0/0, ticks=51079/51567, in_queue=102646, util=89.65% 00:28:26.378 11:41:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:28:26.378 11:41:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=105662 00:28:26.378 11:41:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:28:26.378 11:41:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:28:26.378 [global] 00:28:26.378 thread=1 00:28:26.378 invalidate=1 00:28:26.378 rw=read 00:28:26.378 time_based=1 00:28:26.378 runtime=10 00:28:26.378 ioengine=libaio 00:28:26.378 direct=1 00:28:26.378 bs=4096 00:28:26.378 iodepth=1 00:28:26.378 norandommap=1 00:28:26.378 numjobs=1 00:28:26.378 00:28:26.378 [job0] 00:28:26.378 filename=/dev/nvme0n1 00:28:26.378 [job1] 00:28:26.378 filename=/dev/nvme0n2 00:28:26.378 [job2] 00:28:26.378 filename=/dev/nvme0n3 00:28:26.378 [job3] 00:28:26.378 filename=/dev/nvme0n4 00:28:26.378 Could not set queue depth (nvme0n1) 00:28:26.378 Could not set queue depth (nvme0n2) 00:28:26.378 Could not set queue depth (nvme0n3) 00:28:26.378 Could not set queue depth (nvme0n4) 00:28:26.378 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:28:26.378 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:28:26.378 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:28:26.378 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 
4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:28:26.378 fio-3.35 00:28:26.378 Starting 4 threads 00:28:29.664 11:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete concat0 00:28:29.664 fio: pid=105706, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:28:29.664 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=36126720, buflen=4096 00:28:29.664 11:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0 00:28:29.922 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=53243904, buflen=4096 00:28:29.922 fio: pid=105705, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:28:29.922 11:41:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:28:29.922 11:41:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:28:30.180 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=65826816, buflen=4096 00:28:30.180 fio: pid=105702, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:28:30.180 11:41:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:28:30.180 11:41:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:28:30.747 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=51941376, buflen=4096 00:28:30.747 fio: pid=105703, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:28:30.747 11:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:28:30.747 11:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:28:30.747 00:28:30.747 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=105702: Wed Nov 20 11:41:16 2024 00:28:30.747 read: IOPS=4322, BW=16.9MiB/s (17.7MB/s)(62.8MiB/3718msec) 00:28:30.747 slat (usec): min=12, max=15382, avg=21.43, stdev=199.01 00:28:30.747 clat (usec): min=154, max=2976, avg=208.25, stdev=54.61 00:28:30.747 lat (usec): min=171, max=15694, avg=229.68, stdev=208.12 00:28:30.747 clat percentiles (usec): 00:28:30.747 | 1.00th=[ 167], 5.00th=[ 174], 10.00th=[ 176], 20.00th=[ 182], 00:28:30.747 | 30.00th=[ 186], 40.00th=[ 190], 50.00th=[ 194], 60.00th=[ 202], 00:28:30.747 | 70.00th=[ 223], 80.00th=[ 241], 90.00th=[ 255], 95.00th=[ 265], 00:28:30.747 | 99.00th=[ 289], 99.50th=[ 338], 99.90th=[ 635], 99.95th=[ 996], 00:28:30.747 | 99.99th=[ 2933] 00:28:30.747 bw ( KiB/s): min=13873, max=19272, per=35.26%, avg=17316.71, stdev=1688.67, samples=7 00:28:30.747 iops : min= 3468, max= 4818, avg=4328.86, stdev=422.08, samples=7 00:28:30.747 lat (usec) : 250=86.96%, 500=12.90%, 750=0.05%, 1000=0.03% 00:28:30.747 lat (msec) : 2=0.03%, 4=0.02% 00:28:30.747 cpu : usr=1.26%, sys=6.51%, ctx=16085, majf=0, minf=1 
00:28:30.747 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:30.747 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:30.747 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:30.747 issued rwts: total=16072,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:30.747 latency : target=0, window=0, percentile=100.00%, depth=1 00:28:30.747 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=105703: Wed Nov 20 11:41:16 2024 00:28:30.747 read: IOPS=3078, BW=12.0MiB/s (12.6MB/s)(49.5MiB/4119msec) 00:28:30.747 slat (usec): min=9, max=18883, avg=21.44, stdev=240.37 00:28:30.747 clat (usec): min=151, max=137104, avg=301.62, stdev=1237.05 00:28:30.747 lat (usec): min=166, max=137117, avg=323.06, stdev=1260.17 00:28:30.747 clat percentiles (usec): 00:28:30.747 | 1.00th=[ 163], 5.00th=[ 172], 10.00th=[ 180], 20.00th=[ 204], 00:28:30.747 | 30.00th=[ 269], 40.00th=[ 285], 50.00th=[ 293], 60.00th=[ 302], 00:28:30.747 | 70.00th=[ 310], 80.00th=[ 338], 90.00th=[ 388], 95.00th=[ 408], 00:28:30.747 | 99.00th=[ 453], 99.50th=[ 474], 99.90th=[ 906], 99.95th=[ 3064], 00:28:30.747 | 99.99th=[23725] 00:28:30.747 bw ( KiB/s): min=10944, max=15306, per=24.93%, avg=12241.25, stdev=1348.96, samples=8 00:28:30.747 iops : min= 2736, max= 3826, avg=3060.25, stdev=337.08, samples=8 00:28:30.747 lat (usec) : 250=27.73%, 500=71.94%, 750=0.20%, 1000=0.02% 00:28:30.747 lat (msec) : 2=0.04%, 4=0.02%, 10=0.02%, 50=0.01%, 250=0.01% 00:28:30.747 cpu : usr=0.92%, sys=4.32%, ctx=12703, majf=0, minf=1 00:28:30.747 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:30.747 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:30.747 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:30.747 issued rwts: total=12682,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:30.747 latency : target=0, window=0, percentile=100.00%, depth=1 00:28:30.747 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=105705: Wed Nov 20 11:41:16 2024 00:28:30.747 read: IOPS=3863, BW=15.1MiB/s (15.8MB/s)(50.8MiB/3365msec) 00:28:30.747 slat (usec): min=12, max=13154, avg=24.33, stdev=138.29 00:28:30.747 clat (usec): min=160, max=13809, avg=232.58, stdev=136.53 00:28:30.747 lat (usec): min=193, max=13823, avg=256.91, stdev=195.30 00:28:30.747 clat percentiles (usec): 00:28:30.747 | 1.00th=[ 192], 5.00th=[ 198], 10.00th=[ 202], 20.00th=[ 206], 00:28:30.747 | 30.00th=[ 210], 40.00th=[ 215], 50.00th=[ 219], 60.00th=[ 223], 00:28:30.747 | 70.00th=[ 229], 80.00th=[ 239], 90.00th=[ 289], 95.00th=[ 322], 00:28:30.747 | 99.00th=[ 371], 99.50th=[ 424], 99.90th=[ 627], 99.95th=[ 1188], 00:28:30.747 | 99.99th=[ 4359] 00:28:30.747 bw ( KiB/s): min=13408, max=17352, per=31.55%, avg=15496.00, stdev=1760.62, samples=6 00:28:30.747 iops : min= 3352, max= 4338, avg=3874.00, stdev=440.15, samples=6 00:28:30.747 lat (usec) : 250=85.26%, 500=14.39%, 750=0.26%, 1000=0.01% 00:28:30.747 lat (msec) : 2=0.04%, 4=0.02%, 10=0.01%, 20=0.01% 00:28:30.747 cpu : usr=1.31%, sys=7.22%, ctx=13005, majf=0, minf=2 00:28:30.747 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:30.747 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:30.747 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:30.747 issued rwts: total=13000,0,0,0 short=0,0,0,0 dropped=0,0,0,0 
00:28:30.747 latency : target=0, window=0, percentile=100.00%, depth=1 00:28:30.747 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=105706: Wed Nov 20 11:41:16 2024 00:28:30.747 read: IOPS=2918, BW=11.4MiB/s (12.0MB/s)(34.5MiB/3022msec) 00:28:30.747 slat (nsec): min=10654, max=88259, avg=15822.33, stdev=4321.91 00:28:30.747 clat (usec): min=179, max=1715, avg=324.99, stdev=50.37 00:28:30.747 lat (usec): min=191, max=1726, avg=340.81, stdev=50.19 00:28:30.747 clat percentiles (usec): 00:28:30.747 | 1.00th=[ 269], 5.00th=[ 277], 10.00th=[ 285], 20.00th=[ 289], 00:28:30.747 | 30.00th=[ 293], 40.00th=[ 302], 50.00th=[ 306], 60.00th=[ 314], 00:28:30.747 | 70.00th=[ 330], 80.00th=[ 375], 90.00th=[ 400], 95.00th=[ 416], 00:28:30.747 | 99.00th=[ 461], 99.50th=[ 474], 99.90th=[ 594], 99.95th=[ 611], 00:28:30.747 | 99.99th=[ 1713] 00:28:30.747 bw ( KiB/s): min=10944, max=12384, per=23.82%, avg=11700.00, stdev=555.45, samples=6 00:28:30.747 iops : min= 2736, max= 3096, avg=2925.00, stdev=138.86, samples=6 00:28:30.747 lat (usec) : 250=0.23%, 500=99.41%, 750=0.33% 00:28:30.747 lat (msec) : 2=0.02% 00:28:30.747 cpu : usr=0.83%, sys=4.47%, ctx=8833, majf=0, minf=2 00:28:30.747 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:30.747 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:30.747 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:30.747 issued rwts: total=8821,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:30.747 latency : target=0, window=0, percentile=100.00%, depth=1 00:28:30.747 00:28:30.747 Run status group 0 (all jobs): 00:28:30.747 READ: bw=48.0MiB/s (50.3MB/s), 11.4MiB/s-16.9MiB/s (12.0MB/s-17.7MB/s), io=198MiB (207MB), run=3022-4119msec 00:28:30.747 00:28:30.747 Disk stats (read/write): 00:28:30.748 nvme0n1: ios=15498/0, merge=0/0, ticks=3278/0, in_queue=3278, util=94.79% 00:28:30.748 nvme0n2: ios=12621/0, merge=0/0, ticks=3826/0, in_queue=3826, util=95.42% 00:28:30.748 nvme0n3: ios=12957/0, merge=0/0, ticks=3064/0, in_queue=3064, util=96.13% 00:28:30.748 nvme0n4: ios=8349/0, merge=0/0, ticks=2660/0, in_queue=2660, util=96.71% 00:28:31.006 11:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:28:31.006 11:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:28:31.265 11:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:28:31.265 11:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:28:31.523 11:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:28:31.523 11:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:28:32.089 11:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:28:32.089 11:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:28:32.348 11:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:28:32.348 11:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # wait 105662 00:28:32.348 11:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:28:32.348 11:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:28:32.348 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:28:32.348 11:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:28:32.348 11:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:28:32.348 11:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:28:32.348 11:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:28:32.348 11:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:28:32.348 11:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:28:32.348 nvmf hotplug test: fio failed as expected 00:28:32.348 11:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:28:32.348 11:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:28:32.348 11:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:28:32.348 11:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:32.607 11:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:28:32.607 11:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:28:32.607 11:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:28:32.607 11:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:28:32.608 11:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:28:32.608 11:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:32.608 11:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:28:32.608 11:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:32.608 11:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:28:32.608 11:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:32.608 11:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:32.608 
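The trace above is the hotplug check itself: the raid and malloc bdevs backing the namespaces are deleted while fio is still reading, fio exits non-zero (fio_status=4), and the script treats that as success ("fio failed as expected") before deleting the subsystem and tearing the target down; the rmmod output of the modprobe just issued follows below. A condensed sketch of that flow, under the assumption that $malloc_bdevs and friends are populated earlier in target/fio.sh (the real logic lives in target/fio.sh and nvmf/common.sh):

    # Condensed sketch of the hotplug check traced above.
    /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 &
    fio_pid=$!
    sleep 3
    # Pull the backing bdevs out from under the running read jobs.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete concat0
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0
    for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs; do
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete "$malloc_bdev"
    done
    fio_status=0
    wait "$fio_pid" || fio_status=$?   # reads fail with "Operation not supported", fio exits non-zero
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    [[ $fio_status -ne 0 ]] && echo 'nvmf hotplug test: fio failed as expected'
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1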
rmmod nvme_tcp 00:28:32.608 rmmod nvme_fabrics 00:28:32.608 rmmod nvme_keyring 00:28:32.608 11:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:32.608 11:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:28:32.608 11:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:28:32.608 11:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 105184 ']' 00:28:32.608 11:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 105184 00:28:32.608 11:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 105184 ']' 00:28:32.608 11:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 105184 00:28:32.608 11:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:28:32.608 11:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:32.608 11:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 105184 00:28:32.608 11:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:32.608 11:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:32.608 killing process with pid 105184 00:28:32.608 11:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 105184' 00:28:32.608 11:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 105184 00:28:32.608 11:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 105184 00:28:32.866 11:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:32.866 11:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:32.866 11:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:32.866 11:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:28:32.866 11:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:28:32.866 11:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:32.866 11:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:28:32.866 11:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:32.866 11:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:28:32.866 11:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:28:32.866 11:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:28:32.866 11:41:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:28:32.866 11:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:28:32.866 11:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:28:32.866 11:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:28:32.866 11:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:28:32.866 11:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:28:32.866 11:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:28:32.866 11:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:28:33.129 11:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:28:33.129 11:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:28:33.129 11:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:28:33.129 11:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@246 -- # remove_spdk_ns 00:28:33.129 11:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:33.129 11:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:33.129 11:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:33.129 11:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@300 -- # return 0 00:28:33.129 00:28:33.129 real 0m20.642s 00:28:33.129 user 1m1.612s 00:28:33.129 sys 0m12.856s 00:28:33.129 11:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:33.129 11:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:28:33.129 ************************************ 00:28:33.129 END TEST nvmf_fio_target 00:28:33.129 ************************************ 00:28:33.129 11:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:28:33.129 11:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:28:33.129 11:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:33.129 11:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:28:33.129 ************************************ 00:28:33.129 START TEST nvmf_bdevio 00:28:33.129 ************************************ 00:28:33.129 11:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh 
--transport=tcp --interrupt-mode 00:28:33.129 * Looking for test storage... 00:28:33.129 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:28:33.129 11:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:28:33.129 11:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:28:33.129 11:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lcov --version 00:28:33.390 11:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:28:33.390 11:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:33.390 11:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:33.390 11:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:33.390 11:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:28:33.390 11:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:28:33.390 11:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:28:33.390 11:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:28:33.390 11:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:28:33.390 11:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:28:33.390 11:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:28:33.390 11:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:33.390 11:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:28:33.390 11:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:28:33.390 11:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:33.390 11:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:33.390 11:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:28:33.390 11:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:28:33.390 11:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:33.390 11:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:28:33.390 11:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:28:33.390 11:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:28:33.390 11:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:28:33.390 11:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:33.390 11:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:28:33.390 11:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:28:33.390 11:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:33.390 11:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:33.390 11:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:28:33.390 11:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:33.390 11:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:28:33.390 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:33.390 --rc genhtml_branch_coverage=1 00:28:33.390 --rc genhtml_function_coverage=1 00:28:33.390 --rc genhtml_legend=1 00:28:33.390 --rc geninfo_all_blocks=1 00:28:33.390 --rc geninfo_unexecuted_blocks=1 00:28:33.390 00:28:33.390 ' 00:28:33.390 11:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:28:33.390 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:33.390 --rc genhtml_branch_coverage=1 00:28:33.390 --rc genhtml_function_coverage=1 00:28:33.390 --rc genhtml_legend=1 00:28:33.390 --rc geninfo_all_blocks=1 00:28:33.390 --rc geninfo_unexecuted_blocks=1 00:28:33.390 00:28:33.390 ' 00:28:33.390 11:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:28:33.390 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:33.390 --rc genhtml_branch_coverage=1 00:28:33.390 --rc genhtml_function_coverage=1 00:28:33.390 --rc genhtml_legend=1 00:28:33.390 --rc geninfo_all_blocks=1 00:28:33.390 --rc geninfo_unexecuted_blocks=1 00:28:33.390 00:28:33.390 ' 00:28:33.390 11:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:28:33.390 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:33.390 --rc genhtml_branch_coverage=1 00:28:33.390 --rc genhtml_function_coverage=1 00:28:33.390 --rc genhtml_legend=1 00:28:33.390 --rc geninfo_all_blocks=1 00:28:33.390 --rc geninfo_unexecuted_blocks=1 00:28:33.390 00:28:33.390 ' 00:28:33.390 11:41:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:28:33.390 11:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:28:33.390 11:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:33.390 11:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:33.390 11:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:33.390 11:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:33.390 11:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:33.390 11:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:33.390 11:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:33.390 11:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:33.390 11:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:33.390 11:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:33.390 11:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a 00:28:33.390 11:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=806d442c-08ba-4719-b76d-33169283459a 00:28:33.390 11:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:33.390 11:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:33.390 11:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:28:33.390 11:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:33.390 11:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:28:33.390 11:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:28:33.390 11:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:33.390 11:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:33.390 11:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:33.390 11:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:33.390 11:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:33.390 11:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:33.390 11:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:28:33.390 11:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:33.390 11:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:28:33.390 11:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:33.390 11:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:33.391 11:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:33.391 11:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:33.391 11:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:33.391 11:41:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:28:33.391 11:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:28:33.391 11:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:33.391 11:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:33.391 11:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:33.391 11:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:33.391 11:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:33.391 11:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:28:33.391 11:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:33.391 11:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:33.391 11:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:33.391 11:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:33.391 11:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:33.391 11:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:33.391 11:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:33.391 11:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:33.391 11:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:28:33.391 11:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:28:33.391 11:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:28:33.391 11:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:28:33.391 11:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:28:33.391 11:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@460 -- # nvmf_veth_init 00:28:33.391 11:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:33.391 11:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:28:33.391 11:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:28:33.391 11:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:28:33.391 11:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:33.391 11:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:28:33.391 11:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio 
-- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:28:33.391 11:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:28:33.391 11:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:28:33.391 11:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:28:33.391 11:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:28:33.391 11:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:33.391 11:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:28:33.391 11:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:28:33.391 11:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:28:33.391 11:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:28:33.391 11:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:28:33.391 Cannot find device "nvmf_init_br" 00:28:33.391 11:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@162 -- # true 00:28:33.391 11:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:28:33.391 Cannot find device "nvmf_init_br2" 00:28:33.391 11:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@163 -- # true 00:28:33.391 11:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:28:33.391 Cannot find device "nvmf_tgt_br" 00:28:33.391 11:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@164 -- # true 00:28:33.391 11:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:28:33.391 Cannot find device "nvmf_tgt_br2" 00:28:33.391 11:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@165 -- # true 00:28:33.391 11:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:28:33.391 Cannot find device "nvmf_init_br" 00:28:33.391 11:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@166 -- # true 00:28:33.391 11:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:28:33.391 Cannot find device "nvmf_init_br2" 00:28:33.391 11:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@167 -- # true 00:28:33.391 11:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:28:33.391 Cannot find device "nvmf_tgt_br" 00:28:33.391 11:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@168 -- # true 00:28:33.391 11:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:28:33.391 Cannot find device "nvmf_tgt_br2" 00:28:33.391 11:41:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@169 -- # true 00:28:33.391 11:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:28:33.391 Cannot find device "nvmf_br" 00:28:33.391 11:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@170 -- # true 00:28:33.391 11:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:28:33.391 Cannot find device "nvmf_init_if" 00:28:33.391 11:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@171 -- # true 00:28:33.391 11:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:28:33.391 Cannot find device "nvmf_init_if2" 00:28:33.391 11:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@172 -- # true 00:28:33.391 11:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:28:33.391 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:28:33.391 11:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@173 -- # true 00:28:33.391 11:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:28:33.391 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:28:33.391 11:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@174 -- # true 00:28:33.391 11:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:28:33.391 11:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:28:33.391 11:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:28:33.650 11:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:28:33.650 11:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:28:33.650 11:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:28:33.650 11:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:28:33.650 11:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:28:33.650 11:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:28:33.650 11:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:28:33.650 11:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:28:33.650 11:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:28:33.650 11:41:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:28:33.650 11:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:28:33.650 11:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:28:33.650 11:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:28:33.650 11:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:28:33.650 11:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:28:33.650 11:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:28:33.650 11:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:28:33.650 11:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:28:33.650 11:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:28:33.650 11:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:28:33.650 11:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:28:33.650 11:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:28:33.650 11:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:28:33.650 11:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:28:33.650 11:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:28:33.650 11:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:28:33.651 11:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:28:33.651 11:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:28:33.651 11:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:28:33.651 11:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:28:33.651 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:28:33.651 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.084 ms 00:28:33.651 00:28:33.651 --- 10.0.0.3 ping statistics --- 00:28:33.651 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:33.651 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:28:33.651 11:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:28:33.651 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:28:33.651 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.054 ms 00:28:33.651 00:28:33.651 --- 10.0.0.4 ping statistics --- 00:28:33.651 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:33.651 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:28:33.651 11:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:28:33.651 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:33.651 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:28:33.651 00:28:33.651 --- 10.0.0.1 ping statistics --- 00:28:33.651 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:33.651 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:28:33.651 11:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:28:33.651 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:33.651 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.047 ms 00:28:33.651 00:28:33.651 --- 10.0.0.2 ping statistics --- 00:28:33.651 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:33.651 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:28:33.651 11:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:33.651 11:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@461 -- # return 0 00:28:33.651 11:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:33.651 11:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:33.651 11:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:33.651 11:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:33.651 11:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:33.651 11:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:33.651 11:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:33.910 11:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:28:33.910 11:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:33.910 11:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:33.910 11:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:28:33.910 11:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=106091 00:28:33.910 11:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 106091 00:28:33.910 
11:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78 00:28:33.910 11:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 106091 ']' 00:28:33.910 11:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:33.910 11:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:33.910 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:33.910 11:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:33.910 11:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:33.910 11:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:28:33.910 [2024-11-20 11:41:19.309645] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:28:33.910 [2024-11-20 11:41:19.310953] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 00:28:33.910 [2024-11-20 11:41:19.311036] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:33.910 [2024-11-20 11:41:19.464363] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:34.169 [2024-11-20 11:41:19.505642] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:34.169 [2024-11-20 11:41:19.505735] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:34.169 [2024-11-20 11:41:19.505761] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:34.169 [2024-11-20 11:41:19.505778] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:34.169 [2024-11-20 11:41:19.505792] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:34.170 [2024-11-20 11:41:19.506930] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:28:34.170 [2024-11-20 11:41:19.507033] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:28:34.170 [2024-11-20 11:41:19.507513] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:28:34.170 [2024-11-20 11:41:19.507534] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:34.170 [2024-11-20 11:41:19.575703] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:28:34.170 [2024-11-20 11:41:19.575972] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:28:34.170 [2024-11-20 11:41:19.576045] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
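At this point nvmf_tgt is running inside the network namespace with interrupt mode enabled on its reactors; the remaining intr-mode notices and the target setup continue below. The RPC sequence the bdevio test performs next (all commands and arguments are taken verbatim from the trace that follows; only the $rpc shorthand is added here, and the flags are shown exactly as the script passes them):

    # Sketch of the target-side setup driven through rpc.py, per the trace below.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0        # MALLOC_BDEV_SIZE=64, MALLOC_BLOCK_SIZE=512
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420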
00:28:34.170 [2024-11-20 11:41:19.576413] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:28:34.170 [2024-11-20 11:41:19.576778] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:28:34.170 11:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:34.170 11:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:28:34.170 11:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:34.170 11:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:34.170 11:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:28:34.170 11:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:34.170 11:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:34.170 11:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:34.170 11:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:28:34.170 [2024-11-20 11:41:19.656742] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:34.170 11:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:34.170 11:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:34.170 11:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:34.170 11:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:28:34.170 Malloc0 00:28:34.170 11:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:34.170 11:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:34.170 11:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:34.170 11:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:28:34.170 11:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:34.170 11:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:34.170 11:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:34.170 11:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:28:34.170 11:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:34.170 11:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:28:34.170 11:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:34.170 11:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:28:34.170 [2024-11-20 11:41:19.720934] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:28:34.170 11:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:34.170 11:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:28:34.170 11:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:28:34.170 11:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:28:34.170 11:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:28:34.170 11:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:34.170 11:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:34.170 { 00:28:34.170 "params": { 00:28:34.170 "name": "Nvme$subsystem", 00:28:34.170 "trtype": "$TEST_TRANSPORT", 00:28:34.170 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:34.170 "adrfam": "ipv4", 00:28:34.170 "trsvcid": "$NVMF_PORT", 00:28:34.170 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:34.170 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:34.170 "hdgst": ${hdgst:-false}, 00:28:34.170 "ddgst": ${ddgst:-false} 00:28:34.170 }, 00:28:34.170 "method": "bdev_nvme_attach_controller" 00:28:34.170 } 00:28:34.170 EOF 00:28:34.170 )") 00:28:34.170 11:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:28:34.170 11:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 00:28:34.170 11:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:28:34.170 11:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:28:34.170 "params": { 00:28:34.170 "name": "Nvme1", 00:28:34.170 "trtype": "tcp", 00:28:34.170 "traddr": "10.0.0.3", 00:28:34.170 "adrfam": "ipv4", 00:28:34.170 "trsvcid": "4420", 00:28:34.170 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:34.170 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:34.170 "hdgst": false, 00:28:34.170 "ddgst": false 00:28:34.170 }, 00:28:34.170 "method": "bdev_nvme_attach_controller" 00:28:34.170 }' 00:28:34.429 [2024-11-20 11:41:19.783358] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 
00:28:34.429 [2024-11-20 11:41:19.783459] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid106126 ] 00:28:34.429 [2024-11-20 11:41:19.937549] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:34.429 [2024-11-20 11:41:19.980218] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:34.429 [2024-11-20 11:41:19.980401] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:34.429 [2024-11-20 11:41:19.980401] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:34.687 I/O targets: 00:28:34.688 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:28:34.688 00:28:34.688 00:28:34.688 CUnit - A unit testing framework for C - Version 2.1-3 00:28:34.688 http://cunit.sourceforge.net/ 00:28:34.688 00:28:34.688 00:28:34.688 Suite: bdevio tests on: Nvme1n1 00:28:34.688 Test: blockdev write read block ...passed 00:28:34.688 Test: blockdev write zeroes read block ...passed 00:28:34.688 Test: blockdev write zeroes read no split ...passed 00:28:34.688 Test: blockdev write zeroes read split ...passed 00:28:34.688 Test: blockdev write zeroes read split partial ...passed 00:28:34.688 Test: blockdev reset ...[2024-11-20 11:41:20.230755] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:28:34.688 [2024-11-20 11:41:20.230893] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169af50 (9): Bad file descriptor 00:28:34.688 [2024-11-20 11:41:20.234906] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:28:34.688 passed 00:28:34.688 Test: blockdev write read 8 blocks ...passed 00:28:34.688 Test: blockdev write read size > 128k ...passed 00:28:34.688 Test: blockdev write read invalid size ...passed 00:28:34.946 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:28:34.946 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:28:34.946 Test: blockdev write read max offset ...passed 00:28:34.946 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:28:34.946 Test: blockdev writev readv 8 blocks ...passed 00:28:34.946 Test: blockdev writev readv 30 x 1block ...passed 00:28:34.946 Test: blockdev writev readv block ...passed 00:28:34.946 Test: blockdev writev readv size > 128k ...passed 00:28:34.946 Test: blockdev writev readv size > 128k in two iovs ...passed 00:28:34.946 Test: blockdev comparev and writev ...[2024-11-20 11:41:20.412640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:28:34.946 [2024-11-20 11:41:20.412691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:34.946 [2024-11-20 11:41:20.412712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:28:34.946 [2024-11-20 11:41:20.412724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:34.946 [2024-11-20 11:41:20.413232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:28:34.946 [2024-11-20 11:41:20.413263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:28:34.946 [2024-11-20 11:41:20.413282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:28:34.946 [2024-11-20 11:41:20.413293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:28:34.946 [2024-11-20 11:41:20.413692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:28:34.946 [2024-11-20 11:41:20.413724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:28:34.946 [2024-11-20 11:41:20.413742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:28:34.946 [2024-11-20 11:41:20.413753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:28:34.946 [2024-11-20 11:41:20.414140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:28:34.946 [2024-11-20 11:41:20.414171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:28:34.946 [2024-11-20 11:41:20.414190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:28:34.946 [2024-11-20 11:41:20.414201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:28:34.946 passed 00:28:34.946 Test: blockdev nvme passthru rw ...passed 00:28:34.946 Test: blockdev nvme passthru vendor specific ...[2024-11-20 11:41:20.497965] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:28:34.946 [2024-11-20 11:41:20.498005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:28:34.946 [2024-11-20 11:41:20.498518] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:28:34.946 [2024-11-20 11:41:20.498550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:28:34.946 [2024-11-20 11:41:20.498808] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:28:34.946 [2024-11-20 11:41:20.498838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:28:34.946 [2024-11-20 11:41:20.499234] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:28:34.946 [2024-11-20 11:41:20.499264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:28:34.946 passed 00:28:34.946 Test: blockdev nvme admin passthru ...passed 00:28:35.204 Test: blockdev copy ...passed 00:28:35.204 00:28:35.204 Run Summary: Type Total Ran Passed Failed Inactive 00:28:35.204 suites 1 1 n/a 0 0 00:28:35.204 tests 23 23 23 0 0 00:28:35.204 asserts 152 152 152 0 n/a 00:28:35.204 00:28:35.205 Elapsed time = 0.858 seconds 00:28:35.205 11:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:35.205 11:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:35.205 11:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:28:35.205 11:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:35.205 11:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:28:35.205 11:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:28:35.205 11:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:35.205 11:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:28:35.205 11:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:35.205 11:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:28:35.205 11:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:35.205 11:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:35.205 rmmod nvme_tcp 00:28:35.205 rmmod nvme_fabrics 00:28:35.205 rmmod nvme_keyring 00:28:35.205 11:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 
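The trace above is the start of the teardown path: the EXIT trap is cleared, the test subsystem is deleted over RPC, and nvmftestfini unloads the kernel initiator modules before killing the target process (the killprocess call for pid 106091 continues below). Reduced to bare commands, the same cleanup could be replayed roughly as follows; the rpc.py path is assumed, the pid and NQN are taken from the log, and the script's extra sanity checks are left out.

    scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    # unload the kernel NVMe-oF initiator stack, mirroring the rmmod lines above
    modprobe -v -r nvme-tcp
    modprobe -v -r nvme-fabrics
    # stop the target app started for this test; wait only works if it is our child,
    # which it is for the script's killprocess helper
    kill 106091 && wait 106091 2>/dev/null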
00:28:35.205 11:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:28:35.205 11:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:28:35.205 11:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 106091 ']' 00:28:35.205 11:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 106091 00:28:35.205 11:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 106091 ']' 00:28:35.205 11:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 106091 00:28:35.205 11:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:28:35.463 11:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:35.463 11:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 106091 00:28:35.463 11:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:28:35.463 11:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:28:35.463 11:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 106091' 00:28:35.463 killing process with pid 106091 00:28:35.463 11:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 106091 00:28:35.463 11:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 106091 00:28:35.463 11:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:35.463 11:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:35.463 11:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:35.463 11:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:28:35.463 11:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:28:35.463 11:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:35.463 11:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:28:35.463 11:41:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:35.463 11:41:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:28:35.463 11:41:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:28:35.463 11:41:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:28:35.463 11:41:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:28:35.721 11:41:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:28:35.721 11:41:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:28:35.721 11:41:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:28:35.721 11:41:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:28:35.721 11:41:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:28:35.721 11:41:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:28:35.721 11:41:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:28:35.721 11:41:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:28:35.721 11:41:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:28:35.721 11:41:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:28:35.721 11:41:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@246 -- # remove_spdk_ns 00:28:35.721 11:41:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:35.721 11:41:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:35.721 11:41:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:35.721 11:41:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@300 -- # return 0 00:28:35.721 00:28:35.721 real 0m2.664s 00:28:35.721 user 0m6.381s 00:28:35.721 sys 0m1.114s 00:28:35.721 11:41:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:35.721 11:41:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:28:35.721 ************************************ 00:28:35.721 END TEST nvmf_bdevio 00:28:35.721 ************************************ 00:28:35.721 11:41:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:28:35.721 00:28:35.721 real 3m32.270s 00:28:35.721 user 9m43.916s 00:28:35.721 sys 1m21.176s 00:28:35.721 11:41:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:35.721 11:41:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:28:35.721 ************************************ 00:28:35.721 END TEST nvmf_target_core_interrupt_mode 00:28:35.721 ************************************ 00:28:35.979 11:41:21 nvmf_tcp -- nvmf/nvmf.sh@21 -- # run_test nvmf_interrupt /home/vagrant/spdk_repo/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:28:35.980 11:41:21 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:28:35.980 11:41:21 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:35.980 11:41:21 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:35.980 ************************************ 00:28:35.980 START TEST nvmf_interrupt 00:28:35.980 ************************************ 00:28:35.980 11:41:21 nvmf_tcp.nvmf_interrupt -- 
common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:28:35.980 * Looking for test storage... 00:28:35.980 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:28:35.980 11:41:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:28:35.980 11:41:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1693 -- # lcov --version 00:28:35.980 11:41:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:28:35.980 11:41:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:28:35.980 11:41:21 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:35.980 11:41:21 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:35.980 11:41:21 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:35.980 11:41:21 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # IFS=.-: 00:28:35.980 11:41:21 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # read -ra ver1 00:28:35.980 11:41:21 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # IFS=.-: 00:28:35.980 11:41:21 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # read -ra ver2 00:28:35.980 11:41:21 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@338 -- # local 'op=<' 00:28:35.980 11:41:21 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@340 -- # ver1_l=2 00:28:35.980 11:41:21 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@341 -- # ver2_l=1 00:28:35.980 11:41:21 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:35.980 11:41:21 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@344 -- # case "$op" in 00:28:35.980 11:41:21 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@345 -- # : 1 00:28:35.980 11:41:21 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:35.980 11:41:21 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:35.980 11:41:21 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # decimal 1 00:28:35.980 11:41:21 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=1 00:28:35.980 11:41:21 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:35.980 11:41:21 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 1 00:28:35.980 11:41:21 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # ver1[v]=1 00:28:35.980 11:41:21 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # decimal 2 00:28:35.980 11:41:21 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=2 00:28:35.980 11:41:21 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:35.980 11:41:21 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 2 00:28:35.980 11:41:21 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # ver2[v]=2 00:28:35.980 11:41:21 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:35.980 11:41:21 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:35.980 11:41:21 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # return 0 00:28:35.980 11:41:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:35.980 11:41:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:28:35.980 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:35.980 --rc genhtml_branch_coverage=1 00:28:35.980 --rc genhtml_function_coverage=1 00:28:35.980 --rc genhtml_legend=1 00:28:35.980 --rc geninfo_all_blocks=1 00:28:35.980 --rc geninfo_unexecuted_blocks=1 00:28:35.980 00:28:35.980 ' 00:28:35.980 11:41:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:28:35.980 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:35.980 --rc genhtml_branch_coverage=1 00:28:35.980 --rc genhtml_function_coverage=1 00:28:35.980 --rc genhtml_legend=1 00:28:35.980 --rc geninfo_all_blocks=1 00:28:35.980 --rc geninfo_unexecuted_blocks=1 00:28:35.980 00:28:35.980 ' 00:28:35.980 11:41:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:28:35.980 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:35.980 --rc genhtml_branch_coverage=1 00:28:35.980 --rc genhtml_function_coverage=1 00:28:35.980 --rc genhtml_legend=1 00:28:35.980 --rc geninfo_all_blocks=1 00:28:35.980 --rc geninfo_unexecuted_blocks=1 00:28:35.980 00:28:35.980 ' 00:28:35.980 11:41:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:28:35.980 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:35.980 --rc genhtml_branch_coverage=1 00:28:35.980 --rc genhtml_function_coverage=1 00:28:35.980 --rc genhtml_legend=1 00:28:35.980 --rc geninfo_all_blocks=1 00:28:35.980 --rc geninfo_unexecuted_blocks=1 00:28:35.980 00:28:35.980 ' 00:28:35.980 11:41:21 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:28:35.980 11:41:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # uname -s 00:28:35.980 11:41:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:35.980 11:41:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:35.980 11:41:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:35.980 11:41:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
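The lt/cmp_versions trace above is how the harness decides whether the installed lcov is older than 2.x before choosing coverage flags: both version strings are split on ".", "-" and ":" and compared field by field. A standalone sketch of that comparison is below; the function name is mine, the real helper lives in scripts/common.sh, and purely numeric version fields are assumed.

    # return 0 if dotted version $1 is strictly lower than $2
    version_lt() {
        local IFS=.-:
        local -a a=($1) b=($2)
        local i x y
        for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
            x=${a[i]:-0} y=${b[i]:-0}
            ((x < y)) && return 0
            ((x > y)) && return 1
        done
        return 1   # equal versions are not "less than"
    }

    # matches the decision in the trace: lcov 1.15 is below 2, so the legacy
    # --rc lcov_branch_coverage/lcov_function_coverage options are selected
    version_lt 1.15 2 && echo "lcov < 2: use legacy --rc lcov_* options"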
00:28:35.980 11:41:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:35.980 11:41:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:35.980 11:41:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:35.980 11:41:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:35.980 11:41:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:35.980 11:41:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:35.980 11:41:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a 00:28:35.980 11:41:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@18 -- # NVME_HOSTID=806d442c-08ba-4719-b76d-33169283459a 00:28:35.980 11:41:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:35.980 11:41:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:35.980 11:41:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:28:35.980 11:41:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:35.980 11:41:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:28:35.980 11:41:21 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@15 -- # shopt -s extglob 00:28:35.980 11:41:21 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:35.980 11:41:21 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:35.980 11:41:21 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:35.980 11:41:21 nvmf_tcp.nvmf_interrupt -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:35.980 11:41:21 nvmf_tcp.nvmf_interrupt -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:35.980 11:41:21 nvmf_tcp.nvmf_interrupt -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:35.980 11:41:21 nvmf_tcp.nvmf_interrupt -- paths/export.sh@5 -- # export PATH 00:28:35.980 11:41:21 
nvmf_tcp.nvmf_interrupt -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:35.980 11:41:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@51 -- # : 0 00:28:35.980 11:41:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:35.980 11:41:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:35.980 11:41:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:35.980 11:41:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:35.980 11:41:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:35.980 11:41:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:28:36.239 11:41:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:28:36.239 11:41:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:36.239 11:41:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:36.239 11:41:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:36.239 11:41:21 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/interrupt/common.sh 00:28:36.239 11:41:21 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@12 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:28:36.239 11:41:21 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@14 -- # nvmftestinit 00:28:36.239 11:41:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:36.239 11:41:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:36.239 11:41:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:36.239 11:41:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:36.239 11:41:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:36.239 11:41:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:36.239 11:41:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:36.239 11:41:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:36.239 11:41:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:28:36.239 11:41:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:28:36.239 11:41:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:28:36.239 11:41:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:28:36.239 11:41:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:28:36.239 11:41:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@460 -- # nvmf_veth_init 00:28:36.239 11:41:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:36.239 11:41:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:28:36.239 11:41:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 
00:28:36.239 11:41:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:28:36.239 11:41:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:36.239 11:41:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:28:36.239 11:41:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:28:36.239 11:41:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:28:36.239 11:41:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:28:36.239 11:41:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:28:36.239 11:41:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:28:36.239 11:41:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:36.239 11:41:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:28:36.239 11:41:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:28:36.239 11:41:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:28:36.239 11:41:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:28:36.239 11:41:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:28:36.239 Cannot find device "nvmf_init_br" 00:28:36.239 11:41:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@162 -- # true 00:28:36.239 11:41:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:28:36.239 Cannot find device "nvmf_init_br2" 00:28:36.239 11:41:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@163 -- # true 00:28:36.239 11:41:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:28:36.239 Cannot find device "nvmf_tgt_br" 00:28:36.239 11:41:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@164 -- # true 00:28:36.239 11:41:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:28:36.239 Cannot find device "nvmf_tgt_br2" 00:28:36.239 11:41:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@165 -- # true 00:28:36.239 11:41:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:28:36.239 Cannot find device "nvmf_init_br" 00:28:36.239 11:41:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@166 -- # true 00:28:36.239 11:41:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:28:36.239 Cannot find device "nvmf_init_br2" 00:28:36.239 11:41:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@167 -- # true 00:28:36.239 11:41:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:28:36.239 Cannot find device "nvmf_tgt_br" 00:28:36.239 11:41:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@168 -- # true 00:28:36.239 11:41:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:28:36.239 Cannot find device "nvmf_tgt_br2" 00:28:36.239 11:41:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@169 -- # true 00:28:36.239 11:41:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:28:36.239 Cannot find device "nvmf_br" 00:28:36.239 11:41:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@170 -- # true 00:28:36.239 11:41:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@171 -- # ip 
link delete nvmf_init_if 00:28:36.239 Cannot find device "nvmf_init_if" 00:28:36.239 11:41:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@171 -- # true 00:28:36.239 11:41:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:28:36.239 Cannot find device "nvmf_init_if2" 00:28:36.239 11:41:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@172 -- # true 00:28:36.239 11:41:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:28:36.239 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:28:36.239 11:41:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@173 -- # true 00:28:36.239 11:41:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:28:36.239 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:28:36.239 11:41:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@174 -- # true 00:28:36.239 11:41:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:28:36.239 11:41:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:28:36.239 11:41:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:28:36.239 11:41:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:28:36.239 11:41:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:28:36.239 11:41:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:28:36.239 11:41:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:28:36.239 11:41:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:28:36.239 11:41:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:28:36.239 11:41:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:28:36.239 11:41:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:28:36.239 11:41:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:28:36.239 11:41:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:28:36.239 11:41:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:28:36.239 11:41:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:28:36.239 11:41:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:28:36.498 11:41:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:28:36.498 11:41:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:28:36.498 11:41:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:28:36.498 11:41:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:28:36.498 11:41:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:28:36.498 11:41:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@208 -- # ip link set nvmf_br up 
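The ip commands traced above build the virtual test network selected by NET_TYPE=virt: a namespace for the target, veth pairs whose host-side ends join a bridge, addresses from the 10.0.0.0/24 range, and iptables ACCEPT rules for port 4420. Stripped of the helper wrappers, the core of that topology looks roughly like this; names and addresses are exactly those in the trace, while the second initiator/target pair and the cleanup path are omitted.

    ip netns add nvmf_tgt_ns_spdk
    # veth pair for the initiator: "if" end stays on the host, "br" end joins the bridge
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    # veth pair for the target: its "if" end moves into the namespace
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    # allow NVMe/TCP traffic to the initiator-side interface
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT

The pings that follow in the log simply verify that 10.0.0.3 (target side) and 10.0.0.1/10.0.0.2 (initiator side) are reachable across that bridge before the target application is started.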
00:28:36.498 11:41:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:28:36.498 11:41:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:28:36.498 11:41:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:28:36.498 11:41:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:28:36.498 11:41:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:28:36.498 11:41:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:28:36.498 11:41:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:28:36.498 11:41:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:28:36.498 11:41:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:28:36.498 11:41:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:28:36.498 11:41:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:28:36.498 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:28:36.498 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.128 ms 00:28:36.498 00:28:36.498 --- 10.0.0.3 ping statistics --- 00:28:36.498 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:36.498 rtt min/avg/max/mdev = 0.128/0.128/0.128/0.000 ms 00:28:36.498 11:41:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:28:36.498 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:28:36.498 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.096 ms 00:28:36.498 00:28:36.498 --- 10.0.0.4 ping statistics --- 00:28:36.498 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:36.498 rtt min/avg/max/mdev = 0.096/0.096/0.096/0.000 ms 00:28:36.498 11:41:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:28:36.498 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:36.498 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:28:36.498 00:28:36.498 --- 10.0.0.1 ping statistics --- 00:28:36.498 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:36.498 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:28:36.498 11:41:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:28:36.498 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:28:36.498 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.067 ms 00:28:36.498 00:28:36.498 --- 10.0.0.2 ping statistics --- 00:28:36.498 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:36.498 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:28:36.498 11:41:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:36.498 11:41:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@461 -- # return 0 00:28:36.498 11:41:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:36.498 11:41:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:36.498 11:41:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:36.498 11:41:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:36.498 11:41:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:36.498 11:41:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:36.498 11:41:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:36.498 11:41:21 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@15 -- # nvmfappstart -m 0x3 00:28:36.498 11:41:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:36.498 11:41:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:36.498 11:41:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:28:36.498 11:41:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@509 -- # nvmfpid=106379 00:28:36.498 11:41:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:28:36.498 11:41:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@510 -- # waitforlisten 106379 00:28:36.498 11:41:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@835 -- # '[' -z 106379 ']' 00:28:36.498 11:41:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:36.498 11:41:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:36.498 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:36.498 11:41:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:36.498 11:41:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:36.498 11:41:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:28:36.498 [2024-11-20 11:41:22.056428] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:28:36.498 [2024-11-20 11:41:22.057542] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 00:28:36.498 [2024-11-20 11:41:22.057641] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:36.757 [2024-11-20 11:41:22.200948] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:28:36.758 [2024-11-20 11:41:22.232787] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
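nvmfappstart then launches the target inside that namespace with interrupt mode enabled and a two-core mask, and waitforlisten blocks until the RPC socket answers before the script continues. A hand-rolled equivalent is sketched below; the binary path, flags and socket path are copied from the trace, while the rpc.py location and the simple polling loop are my assumptions (the real waitforlisten helper also re-checks on every iteration that the pid is still alive).

    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 &
    nvmfpid=$!
    # poll the default RPC socket until the application starts answering
    until scripts/rpc.py -s /var/tmp/spdk.sock spdk_get_version &>/dev/null; do
        sleep 0.5
    done
    echo "nvmf_tgt (pid $nvmfpid) is up"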
00:28:36.758 [2024-11-20 11:41:22.232842] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:36.758 [2024-11-20 11:41:22.232853] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:36.758 [2024-11-20 11:41:22.232862] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:36.758 [2024-11-20 11:41:22.232869] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:36.758 [2024-11-20 11:41:22.233584] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:36.758 [2024-11-20 11:41:22.233594] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:36.758 [2024-11-20 11:41:22.281822] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:28:36.758 [2024-11-20 11:41:22.282413] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:28:36.758 [2024-11-20 11:41:22.282648] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:28:37.017 11:41:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:37.017 11:41:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@868 -- # return 0 00:28:37.017 11:41:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:37.017 11:41:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:37.017 11:41:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:28:37.017 11:41:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:37.017 11:41:22 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@16 -- # setup_bdev_aio 00:28:37.017 11:41:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # uname -s 00:28:37.017 11:41:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:28:37.017 11:41:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@78 -- # dd if=/dev/zero of=/home/vagrant/spdk_repo/spdk/test/nvmf/target/aiofile bs=2048 count=5000 00:28:37.017 5000+0 records in 00:28:37.017 5000+0 records out 00:28:37.017 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0357504 s, 286 MB/s 00:28:37.017 11:41:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@79 -- # rpc_cmd bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aiofile AIO0 2048 00:28:37.017 11:41:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:37.017 11:41:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:28:37.017 AIO0 00:28:37.017 11:41:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:37.017 11:41:22 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -q 256 00:28:37.017 11:41:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:37.017 11:41:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:28:37.017 [2024-11-20 11:41:22.474501] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:37.017 11:41:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:37.017 11:41:22 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@19 -- 
# rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:28:37.017 11:41:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:37.017 11:41:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:28:37.017 11:41:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:37.017 11:41:22 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0 00:28:37.017 11:41:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:37.017 11:41:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:28:37.017 11:41:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:37.017 11:41:22 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:28:37.017 11:41:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:37.017 11:41:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:28:37.017 [2024-11-20 11:41:22.506948] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:28:37.017 11:41:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:37.017 11:41:22 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:28:37.017 11:41:22 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 106379 0 00:28:37.017 11:41:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 106379 0 idle 00:28:37.017 11:41:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=106379 00:28:37.017 11:41:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:28:37.017 11:41:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:28:37.017 11:41:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:28:37.017 11:41:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:28:37.017 11:41:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:28:37.017 11:41:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:28:37.017 11:41:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:28:37.017 11:41:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:28:37.017 11:41:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:28:37.017 11:41:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 106379 -w 256 00:28:37.017 11:41:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:28:37.276 11:41:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 106379 root 20 0 64.2g 45568 33024 S 0.0 0.4 0:00.18 reactor_0' 00:28:37.276 11:41:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 106379 root 20 0 64.2g 45568 33024 S 0.0 0.4 0:00.18 reactor_0 00:28:37.276 11:41:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:28:37.276 11:41:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:28:37.276 11:41:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:28:37.276 11:41:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:28:37.276 11:41:22 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:28:37.276 11:41:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:28:37.276 11:41:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:28:37.276 11:41:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:28:37.276 11:41:22 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:28:37.276 11:41:22 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 106379 1 00:28:37.276 11:41:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 106379 1 idle 00:28:37.276 11:41:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=106379 00:28:37.276 11:41:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:28:37.276 11:41:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:28:37.276 11:41:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:28:37.276 11:41:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:28:37.276 11:41:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:28:37.276 11:41:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:28:37.276 11:41:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:28:37.276 11:41:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:28:37.276 11:41:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:28:37.276 11:41:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 106379 -w 256 00:28:37.276 11:41:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:28:37.276 11:41:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 106383 root 20 0 64.2g 45568 33024 S 0.0 0.4 0:00.00 reactor_1' 00:28:37.276 11:41:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 106383 root 20 0 64.2g 45568 33024 S 0.0 0.4 0:00.00 reactor_1 00:28:37.276 11:41:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:28:37.276 11:41:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:28:37.536 11:41:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:28:37.536 11:41:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:28:37.536 11:41:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:28:37.536 11:41:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:28:37.536 11:41:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:28:37.536 11:41:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:28:37.536 11:41:22 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@28 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:28:37.536 11:41:22 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 256 -o 4096 -w randrw -M 30 -t 10 -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:28:37.536 11:41:22 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@35 -- # perf_pid=106439 00:28:37.536 11:41:22 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:28:37.536 11:41:22 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:28:37.536 
11:41:22 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 106379 0 00:28:37.536 11:41:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 106379 0 busy 00:28:37.536 11:41:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=106379 00:28:37.536 11:41:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:28:37.536 11:41:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:28:37.536 11:41:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:28:37.536 11:41:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:28:37.537 11:41:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:28:37.537 11:41:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:28:37.537 11:41:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:28:37.537 11:41:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:28:37.537 11:41:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 106379 -w 256 00:28:37.537 11:41:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:28:37.537 11:41:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 106379 root 20 0 64.2g 45568 33024 S 0.0 0.4 0:00.18 reactor_0' 00:28:37.537 11:41:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 106379 root 20 0 64.2g 45568 33024 S 0.0 0.4 0:00.18 reactor_0 00:28:37.537 11:41:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:28:37.537 11:41:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:28:37.537 11:41:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:28:37.537 11:41:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:28:37.537 11:41:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:28:37.537 11:41:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:28:37.537 11:41:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@31 -- # sleep 1 00:28:38.473 11:41:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j-- )) 00:28:38.473 11:41:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:28:38.473 11:41:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 106379 -w 256 00:28:38.473 11:41:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:28:38.732 11:41:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 106379 root 20 0 64.2g 46848 33408 R 99.9 0.4 0:01.60 reactor_0' 00:28:38.732 11:41:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 106379 root 20 0 64.2g 46848 33408 R 99.9 0.4 0:01.60 reactor_0 00:28:38.732 11:41:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:28:38.732 11:41:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:28:38.732 11:41:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:28:38.732 11:41:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99 00:28:38.732 11:41:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:28:38.732 11:41:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:28:38.732 11:41:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = 
\i\d\l\e ]] 00:28:38.732 11:41:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:28:38.732 11:41:24 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:28:38.732 11:41:24 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:28:38.732 11:41:24 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 106379 1 00:28:38.732 11:41:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 106379 1 busy 00:28:38.732 11:41:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=106379 00:28:38.732 11:41:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:28:38.732 11:41:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:28:38.732 11:41:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:28:38.732 11:41:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:28:38.732 11:41:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:28:38.732 11:41:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:28:38.732 11:41:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:28:38.732 11:41:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:28:38.732 11:41:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 106379 -w 256 00:28:38.732 11:41:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:28:38.990 11:41:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 106383 root 20 0 64.2g 46848 33408 R 66.7 0.4 0:00.82 reactor_1' 00:28:38.990 11:41:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:28:38.990 11:41:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 106383 root 20 0 64.2g 46848 33408 R 66.7 0.4 0:00.82 reactor_1 00:28:38.990 11:41:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:28:38.990 11:41:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=66.7 00:28:38.990 11:41:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=66 00:28:38.990 11:41:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:28:38.990 11:41:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:28:38.990 11:41:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:28:38.990 11:41:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:28:38.990 11:41:24 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@42 -- # wait 106439 00:28:48.959 Initializing NVMe Controllers 00:28:48.959 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:28:48.959 Controller IO queue size 256, less than required. 00:28:48.959 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:48.959 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:28:48.959 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:28:48.959 Initialization complete. Launching workers. 
00:28:48.959 ======================================================== 00:28:48.959 Latency(us) 00:28:48.959 Device Information : IOPS MiB/s Average min max 00:28:48.959 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 6232.50 24.35 41139.70 4675.63 85881.45 00:28:48.959 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 6356.60 24.83 40347.16 5642.84 77364.21 00:28:48.959 ======================================================== 00:28:48.959 Total : 12589.10 49.18 40739.52 4675.63 85881.45 00:28:48.959 00:28:48.959 11:41:33 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:28:48.959 11:41:33 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 106379 0 00:28:48.959 11:41:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 106379 0 idle 00:28:48.959 11:41:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=106379 00:28:48.959 11:41:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:28:48.959 11:41:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:28:48.959 11:41:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:28:48.959 11:41:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:28:48.959 11:41:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:28:48.959 11:41:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:28:48.959 11:41:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:28:48.959 11:41:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:28:48.959 11:41:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:28:48.959 11:41:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:28:48.959 11:41:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 106379 -w 256 00:28:48.959 11:41:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 106379 root 20 0 64.2g 46848 33408 S 6.7 0.4 0:13.40 reactor_0' 00:28:48.959 11:41:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:28:48.959 11:41:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 106379 root 20 0 64.2g 46848 33408 S 6.7 0.4 0:13.40 reactor_0 00:28:48.959 11:41:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:28:48.959 11:41:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=6.7 00:28:48.959 11:41:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=6 00:28:48.959 11:41:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:28:48.959 11:41:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:28:48.959 11:41:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:28:48.959 11:41:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:28:48.959 11:41:33 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:28:48.959 11:41:33 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 106379 1 00:28:48.959 11:41:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 106379 1 idle 00:28:48.959 11:41:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=106379 00:28:48.959 11:41:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local 
idx=1 00:28:48.959 11:41:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:28:48.959 11:41:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:28:48.959 11:41:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:28:48.959 11:41:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:28:48.959 11:41:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:28:48.959 11:41:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:28:48.959 11:41:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:28:48.959 11:41:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:28:48.959 11:41:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 106379 -w 256 00:28:48.959 11:41:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:28:48.959 11:41:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 106383 root 20 0 64.2g 46848 33408 S 0.0 0.4 0:06.61 reactor_1' 00:28:48.959 11:41:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 106383 root 20 0 64.2g 46848 33408 S 0.0 0.4 0:06.61 reactor_1 00:28:48.959 11:41:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:28:48.959 11:41:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:28:48.959 11:41:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:28:48.959 11:41:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:28:48.959 11:41:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:28:48.959 11:41:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:28:48.959 11:41:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:28:48.959 11:41:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:28:48.959 11:41:33 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@50 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a --hostid=806d442c-08ba-4719-b76d-33169283459a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:28:48.959 11:41:33 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@51 -- # waitforserial SPDKISFASTANDAWESOME 00:28:48.959 11:41:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1202 -- # local i=0 00:28:48.959 11:41:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:28:48.959 11:41:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:28:48.959 11:41:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1209 -- # sleep 2 00:28:50.338 11:41:35 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:28:50.338 11:41:35 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:28:50.338 11:41:35 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:28:50.338 11:41:35 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:28:50.338 11:41:35 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:28:50.338 11:41:35 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # return 0 00:28:50.338 11:41:35 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in 
{0..1} 00:28:50.338 11:41:35 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 106379 0 00:28:50.338 11:41:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 106379 0 idle 00:28:50.338 11:41:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=106379 00:28:50.338 11:41:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:28:50.338 11:41:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:28:50.338 11:41:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:28:50.338 11:41:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:28:50.338 11:41:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:28:50.338 11:41:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:28:50.338 11:41:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:28:50.338 11:41:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:28:50.338 11:41:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:28:50.338 11:41:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 106379 -w 256 00:28:50.338 11:41:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:28:50.338 11:41:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 106379 root 20 0 64.2g 48896 33408 S 0.0 0.4 0:13.43 reactor_0' 00:28:50.338 11:41:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 106379 root 20 0 64.2g 48896 33408 S 0.0 0.4 0:13.43 reactor_0 00:28:50.338 11:41:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:28:50.338 11:41:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:28:50.338 11:41:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:28:50.338 11:41:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:28:50.338 11:41:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:28:50.338 11:41:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:28:50.338 11:41:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:28:50.338 11:41:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:28:50.338 11:41:35 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:28:50.338 11:41:35 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 106379 1 00:28:50.338 11:41:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 106379 1 idle 00:28:50.338 11:41:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=106379 00:28:50.338 11:41:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:28:50.338 11:41:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:28:50.338 11:41:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:28:50.338 11:41:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:28:50.338 11:41:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:28:50.338 11:41:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:28:50.338 11:41:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:28:50.338 11:41:35 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@25 -- # (( j = 10 )) 00:28:50.338 11:41:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:28:50.338 11:41:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 106379 -w 256 00:28:50.338 11:41:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:28:50.597 11:41:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 106383 root 20 0 64.2g 48896 33408 S 0.0 0.4 0:06.62 reactor_1' 00:28:50.597 11:41:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:28:50.597 11:41:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 106383 root 20 0 64.2g 48896 33408 S 0.0 0.4 0:06.62 reactor_1 00:28:50.597 11:41:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:28:50.597 11:41:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:28:50.597 11:41:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:28:50.597 11:41:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:28:50.597 11:41:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:28:50.597 11:41:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:28:50.597 11:41:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:28:50.597 11:41:35 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@55 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:28:50.597 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:28:50.597 11:41:35 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@56 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:28:50.597 11:41:35 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1223 -- # local i=0 00:28:50.597 11:41:35 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:28:50.597 11:41:35 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:28:50.597 11:41:36 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:28:50.597 11:41:36 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:28:50.597 11:41:36 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1235 -- # return 0 00:28:50.597 11:41:36 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:28:50.597 11:41:36 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@59 -- # nvmftestfini 00:28:50.597 11:41:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:50.597 11:41:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@121 -- # sync 00:28:51.164 11:41:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:51.164 11:41:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@124 -- # set +e 00:28:51.164 11:41:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:51.164 11:41:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:51.164 rmmod nvme_tcp 00:28:51.164 rmmod nvme_fabrics 00:28:51.164 rmmod nvme_keyring 00:28:51.164 11:41:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:51.164 11:41:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@128 -- # set -e 00:28:51.164 11:41:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@129 -- # return 0 00:28:51.164 11:41:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@517 -- # '[' -n 106379 ']' 00:28:51.164 11:41:36 nvmf_tcp.nvmf_interrupt 
-- nvmf/common.sh@518 -- # killprocess 106379 00:28:51.164 11:41:36 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@954 -- # '[' -z 106379 ']' 00:28:51.164 11:41:36 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@958 -- # kill -0 106379 00:28:51.164 11:41:36 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # uname 00:28:51.164 11:41:36 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:51.164 11:41:36 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 106379 00:28:51.164 11:41:36 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:51.164 11:41:36 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:51.164 killing process with pid 106379 00:28:51.164 11:41:36 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 106379' 00:28:51.164 11:41:36 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@973 -- # kill 106379 00:28:51.164 11:41:36 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@978 -- # wait 106379 00:28:51.164 11:41:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:51.164 11:41:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:51.164 11:41:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:51.164 11:41:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@297 -- # iptr 00:28:51.164 11:41:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-save 00:28:51.164 11:41:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:51.164 11:41:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-restore 00:28:51.423 11:41:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:51.423 11:41:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:28:51.423 11:41:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:28:51.423 11:41:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:28:51.423 11:41:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:28:51.424 11:41:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:28:51.424 11:41:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:28:51.424 11:41:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:28:51.424 11:41:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:28:51.424 11:41:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:28:51.424 11:41:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:28:51.424 11:41:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:28:51.424 11:41:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:28:51.424 11:41:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:28:51.424 11:41:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:28:51.424 11:41:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@246 -- # remove_spdk_ns 00:28:51.424 11:41:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:28:51.424 11:41:36 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:51.424 11:41:36 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:51.685 11:41:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@300 -- # return 0 00:28:51.685 00:28:51.685 real 0m15.666s 00:28:51.685 user 0m27.646s 00:28:51.685 sys 0m7.489s 00:28:51.685 11:41:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:51.685 11:41:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:28:51.685 ************************************ 00:28:51.685 END TEST nvmf_interrupt 00:28:51.685 ************************************ 00:28:51.685 00:28:51.685 real 20m40.983s 00:28:51.685 user 54m42.526s 00:28:51.685 sys 4m54.927s 00:28:51.685 11:41:37 nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:51.685 ************************************ 00:28:51.685 END TEST nvmf_tcp 00:28:51.685 ************************************ 00:28:51.685 11:41:37 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:51.685 11:41:37 -- spdk/autotest.sh@285 -- # [[ 0 -eq 0 ]] 00:28:51.685 11:41:37 -- spdk/autotest.sh@286 -- # run_test spdkcli_nvmf_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:28:51.685 11:41:37 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:28:51.685 11:41:37 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:51.685 11:41:37 -- common/autotest_common.sh@10 -- # set +x 00:28:51.685 ************************************ 00:28:51.685 START TEST spdkcli_nvmf_tcp 00:28:51.685 ************************************ 00:28:51.685 11:41:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:28:51.685 * Looking for test storage... 
00:28:51.685 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:28:51.685 11:41:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:28:51.685 11:41:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:28:51.685 11:41:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:28:51.685 11:41:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:28:51.685 11:41:37 spdkcli_nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:51.685 11:41:37 spdkcli_nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:51.685 11:41:37 spdkcli_nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:51.685 11:41:37 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:28:51.685 11:41:37 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:28:51.685 11:41:37 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:28:51.685 11:41:37 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:28:51.943 11:41:37 spdkcli_nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:28:51.943 11:41:37 spdkcli_nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:28:51.943 11:41:37 spdkcli_nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:28:51.943 11:41:37 spdkcli_nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:51.943 11:41:37 spdkcli_nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:28:51.943 11:41:37 spdkcli_nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:28:51.943 11:41:37 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:51.943 11:41:37 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:28:51.943 11:41:37 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:28:51.943 11:41:37 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:28:51.943 11:41:37 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:51.943 11:41:37 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:28:51.943 11:41:37 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:28:51.943 11:41:37 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:28:51.943 11:41:37 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:28:51.943 11:41:37 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:51.943 11:41:37 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:28:51.943 11:41:37 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:28:51.943 11:41:37 spdkcli_nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:51.943 11:41:37 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:51.943 11:41:37 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:28:51.943 11:41:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:51.943 11:41:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:28:51.943 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:51.943 --rc genhtml_branch_coverage=1 00:28:51.943 --rc genhtml_function_coverage=1 00:28:51.944 --rc genhtml_legend=1 00:28:51.944 --rc geninfo_all_blocks=1 00:28:51.944 --rc geninfo_unexecuted_blocks=1 00:28:51.944 00:28:51.944 ' 00:28:51.944 11:41:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:28:51.944 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:51.944 --rc genhtml_branch_coverage=1 
00:28:51.944 --rc genhtml_function_coverage=1 00:28:51.944 --rc genhtml_legend=1 00:28:51.944 --rc geninfo_all_blocks=1 00:28:51.944 --rc geninfo_unexecuted_blocks=1 00:28:51.944 00:28:51.944 ' 00:28:51.944 11:41:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:28:51.944 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:51.944 --rc genhtml_branch_coverage=1 00:28:51.944 --rc genhtml_function_coverage=1 00:28:51.944 --rc genhtml_legend=1 00:28:51.944 --rc geninfo_all_blocks=1 00:28:51.944 --rc geninfo_unexecuted_blocks=1 00:28:51.944 00:28:51.944 ' 00:28:51.944 11:41:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:28:51.944 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:51.944 --rc genhtml_branch_coverage=1 00:28:51.944 --rc genhtml_function_coverage=1 00:28:51.944 --rc genhtml_legend=1 00:28:51.944 --rc geninfo_all_blocks=1 00:28:51.944 --rc geninfo_unexecuted_blocks=1 00:28:51.944 00:28:51.944 ' 00:28:51.944 11:41:37 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:28:51.944 11:41:37 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:28:51.944 11:41:37 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:28:51.944 11:41:37 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:28:51.944 11:41:37 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:28:51.944 11:41:37 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:51.944 11:41:37 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:51.944 11:41:37 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:51.944 11:41:37 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:51.944 11:41:37 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:51.944 11:41:37 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:51.944 11:41:37 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:51.944 11:41:37 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:51.944 11:41:37 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:51.944 11:41:37 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:51.944 11:41:37 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a 00:28:51.944 11:41:37 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=806d442c-08ba-4719-b76d-33169283459a 00:28:51.944 11:41:37 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:51.944 11:41:37 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:51.944 11:41:37 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:28:51.944 11:41:37 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:51.944 11:41:37 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:28:51.944 11:41:37 spdkcli_nvmf_tcp -- scripts/common.sh@15 -- # shopt -s extglob 00:28:51.944 11:41:37 spdkcli_nvmf_tcp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:51.944 11:41:37 spdkcli_nvmf_tcp -- scripts/common.sh@552 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:51.944 11:41:37 spdkcli_nvmf_tcp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:51.944 11:41:37 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:51.944 11:41:37 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:51.944 11:41:37 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:51.944 11:41:37 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:28:51.944 11:41:37 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:51.944 11:41:37 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # : 0 00:28:51.944 11:41:37 spdkcli_nvmf_tcp -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:51.944 11:41:37 spdkcli_nvmf_tcp -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:51.944 11:41:37 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:51.944 11:41:37 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:51.944 11:41:37 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:51.944 11:41:37 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:51.944 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:51.944 11:41:37 spdkcli_nvmf_tcp -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:51.944 11:41:37 spdkcli_nvmf_tcp -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:51.944 11:41:37 spdkcli_nvmf_tcp -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:51.944 11:41:37 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:28:51.944 11:41:37 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:28:51.944 11:41:37 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:28:51.944 11:41:37 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:28:51.944 11:41:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:51.944 11:41:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- 
# set +x 00:28:51.944 11:41:37 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:28:51.944 11:41:37 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=106774 00:28:51.944 11:41:37 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 106774 00:28:51.944 11:41:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@835 -- # '[' -z 106774 ']' 00:28:51.944 11:41:37 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:28:51.944 11:41:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:51.944 11:41:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:51.944 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:51.944 11:41:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:51.944 11:41:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:51.944 11:41:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:51.944 [2024-11-20 11:41:37.372342] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 00:28:51.944 [2024-11-20 11:41:37.372448] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid106774 ] 00:28:51.944 [2024-11-20 11:41:37.524761] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:28:52.203 [2024-11-20 11:41:37.569665] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:52.203 [2024-11-20 11:41:37.569703] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:52.203 11:41:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:52.203 11:41:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@868 -- # return 0 00:28:52.203 11:41:37 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:28:52.203 11:41:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:52.203 11:41:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:52.203 11:41:37 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:28:52.203 11:41:37 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:28:52.203 11:41:37 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:28:52.203 11:41:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:52.203 11:41:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:52.203 11:41:37 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:28:52.203 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:28:52.203 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:28:52.203 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:28:52.203 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:28:52.203 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:28:52.203 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:28:52.203 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW 
max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:28:52.203 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:28:52.203 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:28:52.203 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:28:52.203 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:28:52.203 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:28:52.203 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:28:52.203 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:28:52.203 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:28:52.203 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:28:52.203 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:28:52.203 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:28:52.203 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:28:52.203 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:28:52.203 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:28:52.203 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:28:52.203 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:28:52.203 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:28:52.203 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:28:52.203 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:28:52.203 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:28:52.203 ' 00:28:55.611 [2024-11-20 11:41:40.465982] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:56.545 [2024-11-20 11:41:41.778943] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:28:59.074 [2024-11-20 11:41:44.264979] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:29:00.972 [2024-11-20 11:41:46.402484] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:29:02.875 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:29:02.875 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:29:02.875 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:29:02.875 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 
00:29:02.875 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:29:02.875 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:29:02.875 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:29:02.875 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:29:02.875 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:29:02.875 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:29:02.875 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:29:02.875 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:29:02.875 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:29:02.875 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:29:02.875 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:29:02.875 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:29:02.875 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:29:02.875 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:29:02.875 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:29:02.875 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:29:02.875 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:29:02.875 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:29:02.875 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:29:02.875 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:29:02.875 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:29:02.875 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:29:02.875 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:29:02.875 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:29:02.875 11:41:48 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:29:02.875 11:41:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:02.875 11:41:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- 
# set +x 00:29:02.875 11:41:48 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:29:02.875 11:41:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:02.875 11:41:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:02.875 11:41:48 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:29:02.875 11:41:48 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/spdkcli.py ll /nvmf 00:29:03.133 11:41:48 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/app/match/match /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:29:03.394 11:41:48 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:29:03.394 11:41:48 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:29:03.394 11:41:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:03.394 11:41:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:03.394 11:41:48 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:29:03.394 11:41:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:03.394 11:41:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:03.394 11:41:48 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:29:03.394 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:29:03.394 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:29:03.394 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:29:03.394 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:29:03.394 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:29:03.394 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:29:03.394 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:29:03.394 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:29:03.394 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:29:03.394 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:29:03.394 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:29:03.394 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:29:03.394 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:29:03.394 ' 00:29:09.967 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:29:09.967 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:29:09.967 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:29:09.967 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:29:09.967 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:29:09.967 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:29:09.967 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:29:09.967 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:29:09.967 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:29:09.967 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:29:09.967 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:29:09.967 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:29:09.967 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:29:09.967 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:29:09.967 11:41:54 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:29:09.967 11:41:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:09.967 11:41:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:09.967 11:41:54 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 106774 00:29:09.967 11:41:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 106774 ']' 00:29:09.967 11:41:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 106774 00:29:09.967 11:41:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # uname 00:29:09.967 11:41:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:09.967 11:41:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 106774 00:29:09.967 11:41:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:09.967 killing process with pid 106774 00:29:09.967 11:41:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:09.967 11:41:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 106774' 00:29:09.967 11:41:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@973 -- # kill 106774 00:29:09.967 11:41:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@978 -- # wait 106774 00:29:09.967 11:41:54 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:29:09.967 11:41:54 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:29:09.967 11:41:54 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 106774 ']' 00:29:09.967 11:41:54 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 106774 00:29:09.967 11:41:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 106774 ']' 00:29:09.967 11:41:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 106774 00:29:09.967 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (106774) - No such process 00:29:09.967 Process with pid 106774 is not found 00:29:09.967 11:41:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@981 -- # echo 'Process with pid 106774 is not found' 00:29:09.967 11:41:54 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:29:09.967 11:41:54 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:29:09.967 11:41:54 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_nvmf.test /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:29:09.967 00:29:09.967 real 0m17.559s 00:29:09.967 user 0m38.537s 00:29:09.967 sys 0m0.778s 00:29:09.967 11:41:54 spdkcli_nvmf_tcp -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:29:09.967 ************************************ 00:29:09.967 END TEST spdkcli_nvmf_tcp 00:29:09.967 ************************************ 00:29:09.967 11:41:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:09.967 11:41:54 -- spdk/autotest.sh@287 -- # run_test nvmf_identify_passthru /home/vagrant/spdk_repo/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:29:09.967 11:41:54 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:09.967 11:41:54 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:09.967 11:41:54 -- common/autotest_common.sh@10 -- # set +x 00:29:09.967 ************************************ 00:29:09.967 START TEST nvmf_identify_passthru 00:29:09.967 ************************************ 00:29:09.967 11:41:54 nvmf_identify_passthru -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:29:09.967 * Looking for test storage... 00:29:09.967 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:29:09.967 11:41:54 nvmf_identify_passthru -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:09.967 11:41:54 nvmf_identify_passthru -- common/autotest_common.sh@1693 -- # lcov --version 00:29:09.967 11:41:54 nvmf_identify_passthru -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:09.967 11:41:54 nvmf_identify_passthru -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:09.967 11:41:54 nvmf_identify_passthru -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:09.967 11:41:54 nvmf_identify_passthru -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:09.967 11:41:54 nvmf_identify_passthru -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:09.967 11:41:54 nvmf_identify_passthru -- scripts/common.sh@336 -- # IFS=.-: 00:29:09.967 11:41:54 nvmf_identify_passthru -- scripts/common.sh@336 -- # read -ra ver1 00:29:09.967 11:41:54 nvmf_identify_passthru -- scripts/common.sh@337 -- # IFS=.-: 00:29:09.967 11:41:54 nvmf_identify_passthru -- scripts/common.sh@337 -- # read -ra ver2 00:29:09.967 11:41:54 nvmf_identify_passthru -- scripts/common.sh@338 -- # local 'op=<' 00:29:09.967 11:41:54 nvmf_identify_passthru -- scripts/common.sh@340 -- # ver1_l=2 00:29:09.967 11:41:54 nvmf_identify_passthru -- scripts/common.sh@341 -- # ver2_l=1 00:29:09.967 11:41:54 nvmf_identify_passthru -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:09.967 11:41:54 nvmf_identify_passthru -- scripts/common.sh@344 -- # case "$op" in 00:29:09.967 11:41:54 nvmf_identify_passthru -- scripts/common.sh@345 -- # : 1 00:29:09.967 11:41:54 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:09.967 11:41:54 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:09.967 11:41:54 nvmf_identify_passthru -- scripts/common.sh@365 -- # decimal 1 00:29:09.967 11:41:54 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=1 00:29:09.967 11:41:54 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:09.967 11:41:54 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 1 00:29:09.967 11:41:54 nvmf_identify_passthru -- scripts/common.sh@365 -- # ver1[v]=1 00:29:09.967 11:41:54 nvmf_identify_passthru -- scripts/common.sh@366 -- # decimal 2 00:29:09.967 11:41:54 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=2 00:29:09.967 11:41:54 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:09.967 11:41:54 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 2 00:29:09.967 11:41:54 nvmf_identify_passthru -- scripts/common.sh@366 -- # ver2[v]=2 00:29:09.967 11:41:54 nvmf_identify_passthru -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:09.967 11:41:54 nvmf_identify_passthru -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:09.967 11:41:54 nvmf_identify_passthru -- scripts/common.sh@368 -- # return 0 00:29:09.967 11:41:54 nvmf_identify_passthru -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:09.967 11:41:54 nvmf_identify_passthru -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:09.967 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:09.967 --rc genhtml_branch_coverage=1 00:29:09.967 --rc genhtml_function_coverage=1 00:29:09.967 --rc genhtml_legend=1 00:29:09.967 --rc geninfo_all_blocks=1 00:29:09.967 --rc geninfo_unexecuted_blocks=1 00:29:09.967 00:29:09.967 ' 00:29:09.967 11:41:54 nvmf_identify_passthru -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:09.968 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:09.968 --rc genhtml_branch_coverage=1 00:29:09.968 --rc genhtml_function_coverage=1 00:29:09.968 --rc genhtml_legend=1 00:29:09.968 --rc geninfo_all_blocks=1 00:29:09.968 --rc geninfo_unexecuted_blocks=1 00:29:09.968 00:29:09.968 ' 00:29:09.968 11:41:54 nvmf_identify_passthru -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:09.968 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:09.968 --rc genhtml_branch_coverage=1 00:29:09.968 --rc genhtml_function_coverage=1 00:29:09.968 --rc genhtml_legend=1 00:29:09.968 --rc geninfo_all_blocks=1 00:29:09.968 --rc geninfo_unexecuted_blocks=1 00:29:09.968 00:29:09.968 ' 00:29:09.968 11:41:54 nvmf_identify_passthru -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:09.968 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:09.968 --rc genhtml_branch_coverage=1 00:29:09.968 --rc genhtml_function_coverage=1 00:29:09.968 --rc genhtml_legend=1 00:29:09.968 --rc geninfo_all_blocks=1 00:29:09.968 --rc geninfo_unexecuted_blocks=1 00:29:09.968 00:29:09.968 ' 00:29:09.968 11:41:54 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:29:09.968 11:41:54 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:29:09.968 11:41:54 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:09.968 11:41:54 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:09.968 11:41:54 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:09.968 11:41:54 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:09.968 
11:41:54 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:09.968 11:41:54 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:09.968 11:41:54 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:09.968 11:41:54 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:09.968 11:41:54 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:09.968 11:41:54 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:09.968 11:41:54 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a 00:29:09.968 11:41:54 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=806d442c-08ba-4719-b76d-33169283459a 00:29:09.968 11:41:54 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:09.968 11:41:54 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:09.968 11:41:54 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:29:09.968 11:41:54 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:09.968 11:41:54 nvmf_identify_passthru -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:29:09.968 11:41:54 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:29:09.968 11:41:54 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:09.968 11:41:54 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:09.968 11:41:54 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:09.968 11:41:54 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:09.968 11:41:54 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:09.968 11:41:54 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:09.968 11:41:54 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:29:09.968 11:41:54 nvmf_identify_passthru -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:09.968 11:41:54 nvmf_identify_passthru -- nvmf/common.sh@51 -- # : 0 00:29:09.968 11:41:54 nvmf_identify_passthru -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:09.968 11:41:54 nvmf_identify_passthru -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:09.968 11:41:54 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:09.968 11:41:54 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:09.968 11:41:54 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:09.968 11:41:54 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:09.968 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:09.968 11:41:54 nvmf_identify_passthru -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:09.968 11:41:54 nvmf_identify_passthru -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:09.968 11:41:54 nvmf_identify_passthru -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:09.968 11:41:54 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:29:09.968 11:41:54 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:29:09.968 11:41:54 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:09.968 11:41:54 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:09.968 11:41:54 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:09.968 11:41:54 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:09.968 11:41:54 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:09.968 11:41:54 nvmf_identify_passthru -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:09.968 11:41:54 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:29:09.968 11:41:54 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:09.968 11:41:54 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:29:09.968 11:41:54 nvmf_identify_passthru -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:09.968 11:41:54 nvmf_identify_passthru -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:09.968 11:41:54 nvmf_identify_passthru -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:09.968 11:41:54 nvmf_identify_passthru -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:09.968 11:41:54 nvmf_identify_passthru -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:09.968 11:41:54 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:09.968 11:41:54 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:29:09.968 11:41:54 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:09.968 11:41:54 nvmf_identify_passthru -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:29:09.968 11:41:54 nvmf_identify_passthru -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:29:09.968 11:41:54 nvmf_identify_passthru -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:29:09.968 11:41:54 nvmf_identify_passthru -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:29:09.968 11:41:54 nvmf_identify_passthru -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:29:09.968 11:41:54 nvmf_identify_passthru -- nvmf/common.sh@460 -- # nvmf_veth_init 00:29:09.968 11:41:54 nvmf_identify_passthru -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:09.968 11:41:54 nvmf_identify_passthru -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:29:09.968 11:41:54 nvmf_identify_passthru -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:29:09.968 11:41:54 nvmf_identify_passthru -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:29:09.968 11:41:54 nvmf_identify_passthru -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:09.968 11:41:54 nvmf_identify_passthru -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:29:09.968 11:41:54 nvmf_identify_passthru -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:29:09.968 11:41:54 nvmf_identify_passthru -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:29:09.968 11:41:54 nvmf_identify_passthru -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:29:09.968 11:41:54 nvmf_identify_passthru -- nvmf/common.sh@154 -- # 
NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:29:09.968 11:41:54 nvmf_identify_passthru -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:29:09.968 11:41:54 nvmf_identify_passthru -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:09.968 11:41:54 nvmf_identify_passthru -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:29:09.968 11:41:54 nvmf_identify_passthru -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:29:09.968 11:41:54 nvmf_identify_passthru -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:29:09.969 11:41:54 nvmf_identify_passthru -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:29:09.969 11:41:54 nvmf_identify_passthru -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:29:09.969 Cannot find device "nvmf_init_br" 00:29:09.969 11:41:54 nvmf_identify_passthru -- nvmf/common.sh@162 -- # true 00:29:09.969 11:41:54 nvmf_identify_passthru -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:29:09.969 Cannot find device "nvmf_init_br2" 00:29:09.969 11:41:54 nvmf_identify_passthru -- nvmf/common.sh@163 -- # true 00:29:09.969 11:41:54 nvmf_identify_passthru -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:29:09.969 Cannot find device "nvmf_tgt_br" 00:29:09.969 11:41:54 nvmf_identify_passthru -- nvmf/common.sh@164 -- # true 00:29:09.969 11:41:54 nvmf_identify_passthru -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:29:09.969 Cannot find device "nvmf_tgt_br2" 00:29:09.969 11:41:54 nvmf_identify_passthru -- nvmf/common.sh@165 -- # true 00:29:09.969 11:41:54 nvmf_identify_passthru -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:29:09.969 Cannot find device "nvmf_init_br" 00:29:09.969 11:41:54 nvmf_identify_passthru -- nvmf/common.sh@166 -- # true 00:29:09.969 11:41:54 nvmf_identify_passthru -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:29:09.969 Cannot find device "nvmf_init_br2" 00:29:09.969 11:41:54 nvmf_identify_passthru -- nvmf/common.sh@167 -- # true 00:29:09.969 11:41:54 nvmf_identify_passthru -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:29:09.969 Cannot find device "nvmf_tgt_br" 00:29:09.969 11:41:54 nvmf_identify_passthru -- nvmf/common.sh@168 -- # true 00:29:09.969 11:41:54 nvmf_identify_passthru -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:29:09.969 Cannot find device "nvmf_tgt_br2" 00:29:09.969 11:41:54 nvmf_identify_passthru -- nvmf/common.sh@169 -- # true 00:29:09.969 11:41:54 nvmf_identify_passthru -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:29:09.969 Cannot find device "nvmf_br" 00:29:09.969 11:41:54 nvmf_identify_passthru -- nvmf/common.sh@170 -- # true 00:29:09.969 11:41:54 nvmf_identify_passthru -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:29:09.969 Cannot find device "nvmf_init_if" 00:29:09.969 11:41:55 nvmf_identify_passthru -- nvmf/common.sh@171 -- # true 00:29:09.969 11:41:55 nvmf_identify_passthru -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:29:09.969 Cannot find device "nvmf_init_if2" 00:29:09.969 11:41:55 nvmf_identify_passthru -- nvmf/common.sh@172 -- # true 00:29:09.969 11:41:55 nvmf_identify_passthru -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:29:09.969 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:29:09.969 11:41:55 nvmf_identify_passthru -- nvmf/common.sh@173 -- # true 00:29:09.969 11:41:55 nvmf_identify_passthru -- 
nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:29:09.969 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:29:09.969 11:41:55 nvmf_identify_passthru -- nvmf/common.sh@174 -- # true 00:29:09.969 11:41:55 nvmf_identify_passthru -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:29:09.969 11:41:55 nvmf_identify_passthru -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:29:09.969 11:41:55 nvmf_identify_passthru -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:29:09.969 11:41:55 nvmf_identify_passthru -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:29:09.969 11:41:55 nvmf_identify_passthru -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:29:09.969 11:41:55 nvmf_identify_passthru -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:29:09.969 11:41:55 nvmf_identify_passthru -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:29:09.969 11:41:55 nvmf_identify_passthru -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:29:09.969 11:41:55 nvmf_identify_passthru -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:29:09.969 11:41:55 nvmf_identify_passthru -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:29:09.969 11:41:55 nvmf_identify_passthru -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:29:09.969 11:41:55 nvmf_identify_passthru -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:29:09.969 11:41:55 nvmf_identify_passthru -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:29:09.969 11:41:55 nvmf_identify_passthru -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:29:09.969 11:41:55 nvmf_identify_passthru -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:29:09.969 11:41:55 nvmf_identify_passthru -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:29:09.969 11:41:55 nvmf_identify_passthru -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:29:09.969 11:41:55 nvmf_identify_passthru -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:29:09.969 11:41:55 nvmf_identify_passthru -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:29:09.969 11:41:55 nvmf_identify_passthru -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:29:09.969 11:41:55 nvmf_identify_passthru -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:29:09.969 11:41:55 nvmf_identify_passthru -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:29:09.969 11:41:55 nvmf_identify_passthru -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:29:09.969 11:41:55 nvmf_identify_passthru -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:29:09.969 11:41:55 nvmf_identify_passthru -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:29:09.969 11:41:55 nvmf_identify_passthru -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:29:09.969 11:41:55 nvmf_identify_passthru -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:29:09.969 11:41:55 nvmf_identify_passthru -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 
'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:29:09.969 11:41:55 nvmf_identify_passthru -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:29:09.969 11:41:55 nvmf_identify_passthru -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:29:09.969 11:41:55 nvmf_identify_passthru -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:29:09.969 11:41:55 nvmf_identify_passthru -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:29:09.969 11:41:55 nvmf_identify_passthru -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:29:09.969 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:29:09.969 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.075 ms 00:29:09.969 00:29:09.969 --- 10.0.0.3 ping statistics --- 00:29:09.969 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:09.969 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:29:09.969 11:41:55 nvmf_identify_passthru -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:29:09.969 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:29:09.969 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.039 ms 00:29:09.969 00:29:09.969 --- 10.0.0.4 ping statistics --- 00:29:09.969 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:09.969 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:29:09.969 11:41:55 nvmf_identify_passthru -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:29:09.969 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:09.969 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.034 ms 00:29:09.969 00:29:09.969 --- 10.0.0.1 ping statistics --- 00:29:09.969 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:09.969 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:29:09.969 11:41:55 nvmf_identify_passthru -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:29:09.969 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:29:09.969 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.083 ms 00:29:09.969 00:29:09.969 --- 10.0.0.2 ping statistics --- 00:29:09.969 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:09.969 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 00:29:09.969 11:41:55 nvmf_identify_passthru -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:09.969 11:41:55 nvmf_identify_passthru -- nvmf/common.sh@461 -- # return 0 00:29:09.969 11:41:55 nvmf_identify_passthru -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:09.969 11:41:55 nvmf_identify_passthru -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:09.969 11:41:55 nvmf_identify_passthru -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:09.969 11:41:55 nvmf_identify_passthru -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:09.969 11:41:55 nvmf_identify_passthru -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:09.969 11:41:55 nvmf_identify_passthru -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:09.969 11:41:55 nvmf_identify_passthru -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:09.969 11:41:55 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:29:09.969 11:41:55 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:09.969 11:41:55 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:09.969 11:41:55 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:29:09.969 11:41:55 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # bdfs=() 00:29:09.969 11:41:55 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # local bdfs 00:29:09.969 11:41:55 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:29:09.969 11:41:55 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:29:09.969 11:41:55 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # bdfs=() 00:29:09.969 11:41:55 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # local bdfs 00:29:09.969 11:41:55 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:29:09.969 11:41:55 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:29:09.969 11:41:55 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:29:09.969 11:41:55 nvmf_identify_passthru -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:29:09.969 11:41:55 nvmf_identify_passthru -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:29:09.969 11:41:55 nvmf_identify_passthru -- common/autotest_common.sh@1512 -- # echo 0000:00:10.0 00:29:09.969 11:41:55 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:00:10.0 00:29:09.969 11:41:55 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:00:10.0 ']' 00:29:09.969 11:41:55 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:29:09.969 11:41:55 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:29:09.969 11:41:55 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:29:09.969 11:41:55 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=12340 
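(Sketch, not part of the console output.) The nvmf_veth_init sequence traced above builds the topology that the four pings just verified: two initiator veth pairs keep their addressed ends (nvmf_init_if, nvmf_init_if2) in the root namespace, the target-side addressed ends (nvmf_tgt_if, nvmf_tgt_if2) are moved into nvmf_tgt_ns_spdk where nvmf_tgt will run, and all four *_br peers are enslaved to the nvmf_br bridge. Condensed from the commands in the trace; the SPDK_NVMF comment tags that the teardown later greps for are omitted:

ip netns add nvmf_tgt_ns_spdk

# Four veth pairs: the *_if ends carry addresses, the *_br ends go onto the bridge.
ip link add nvmf_init_if  type veth peer name nvmf_init_br
ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2

# Target-side addressed ends live in the namespace where nvmf_tgt will run.
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

ip addr add 10.0.0.1/24 dev nvmf_init_if
ip addr add 10.0.0.2/24 dev nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2

# Bring everything up and wire the bridge ends together.
for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" up
done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

ip link add nvmf_br type bridge
ip link set nvmf_br up
for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" master nvmf_br
done

# Open TCP/4420 on the initiator interfaces, allow forwarding across the bridge,
# then confirm the listener address is reachable from the root namespace.
iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.3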
00:29:09.969 11:41:55 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:29:09.969 11:41:55 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:29:09.970 11:41:55 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:29:10.229 11:41:55 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=QEMU 00:29:10.229 11:41:55 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:29:10.229 11:41:55 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:10.229 11:41:55 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:10.229 11:41:55 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:29:10.229 11:41:55 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:10.229 11:41:55 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:10.229 11:41:55 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=107272 00:29:10.229 11:41:55 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:10.229 11:41:55 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:29:10.229 11:41:55 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 107272 00:29:10.229 11:41:55 nvmf_identify_passthru -- common/autotest_common.sh@835 -- # '[' -z 107272 ']' 00:29:10.229 11:41:55 nvmf_identify_passthru -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:10.229 11:41:55 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:10.229 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:10.229 11:41:55 nvmf_identify_passthru -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:10.229 11:41:55 nvmf_identify_passthru -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:10.229 11:41:55 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:10.487 [2024-11-20 11:41:55.833388] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 00:29:10.488 [2024-11-20 11:41:55.833683] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:10.488 [2024-11-20 11:41:55.979979] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:10.488 [2024-11-20 11:41:56.014714] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:10.488 [2024-11-20 11:41:56.014775] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:10.488 [2024-11-20 11:41:56.014788] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:10.488 [2024-11-20 11:41:56.014796] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:29:10.488 [2024-11-20 11:41:56.014803] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:10.488 [2024-11-20 11:41:56.017658] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:10.488 [2024-11-20 11:41:56.017773] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:10.488 [2024-11-20 11:41:56.017990] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:10.488 [2024-11-20 11:41:56.017998] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:10.746 11:41:56 nvmf_identify_passthru -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:10.746 11:41:56 nvmf_identify_passthru -- common/autotest_common.sh@868 -- # return 0 00:29:10.746 11:41:56 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:29:10.746 11:41:56 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:10.746 11:41:56 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:10.746 11:41:56 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:10.746 11:41:56 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:29:10.746 11:41:56 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:10.746 11:41:56 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:10.746 [2024-11-20 11:41:56.215123] nvmf_tgt.c: 462:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:29:10.746 11:41:56 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:10.746 11:41:56 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:10.746 11:41:56 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:10.746 11:41:56 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:10.746 [2024-11-20 11:41:56.228714] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:10.746 11:41:56 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:10.746 11:41:56 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:29:10.746 11:41:56 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:10.746 11:41:56 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:10.746 11:41:56 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:29:10.746 11:41:56 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:10.746 11:41:56 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:11.004 Nvme0n1 00:29:11.004 11:41:56 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:11.004 11:41:56 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:29:11.004 11:41:56 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:11.004 11:41:56 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:11.004 11:41:56 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:11.004 11:41:56 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:29:11.004 11:41:56 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:11.004 11:41:56 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:11.004 11:41:56 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:11.004 11:41:56 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:29:11.004 11:41:56 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:11.004 11:41:56 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:11.004 [2024-11-20 11:41:56.367151] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:29:11.004 11:41:56 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:11.004 11:41:56 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:29:11.004 11:41:56 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:11.004 11:41:56 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:11.004 [ 00:29:11.004 { 00:29:11.004 "allow_any_host": true, 00:29:11.004 "hosts": [], 00:29:11.005 "listen_addresses": [], 00:29:11.005 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:29:11.005 "subtype": "Discovery" 00:29:11.005 }, 00:29:11.005 { 00:29:11.005 "allow_any_host": true, 00:29:11.005 "hosts": [], 00:29:11.005 "listen_addresses": [ 00:29:11.005 { 00:29:11.005 "adrfam": "IPv4", 00:29:11.005 "traddr": "10.0.0.3", 00:29:11.005 "trsvcid": "4420", 00:29:11.005 "trtype": "TCP" 00:29:11.005 } 00:29:11.005 ], 00:29:11.005 "max_cntlid": 65519, 00:29:11.005 "max_namespaces": 1, 00:29:11.005 "min_cntlid": 1, 00:29:11.005 "model_number": "SPDK bdev Controller", 00:29:11.005 "namespaces": [ 00:29:11.005 { 00:29:11.005 "bdev_name": "Nvme0n1", 00:29:11.005 "name": "Nvme0n1", 00:29:11.005 "nguid": "5A459279B6224FFFBE8FF287C3F96100", 00:29:11.005 "nsid": 1, 00:29:11.005 "uuid": "5a459279-b622-4fff-be8f-f287c3f96100" 00:29:11.005 } 00:29:11.005 ], 00:29:11.005 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:29:11.005 "serial_number": "SPDK00000000000001", 00:29:11.005 "subtype": "NVMe" 00:29:11.005 } 00:29:11.005 ] 00:29:11.005 11:41:56 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:11.005 11:41:56 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:29:11.005 11:41:56 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:29:11.005 11:41:56 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:29:11.262 11:41:56 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=12340 00:29:11.262 11:41:56 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:29:11.262 11:41:56 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:29:11.262 11:41:56 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:29:11.520 11:41:56 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=QEMU 00:29:11.520 11:41:56 
nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' 12340 '!=' 12340 ']' 00:29:11.520 11:41:56 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' QEMU '!=' QEMU ']' 00:29:11.520 11:41:56 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:11.520 11:41:56 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:11.520 11:41:56 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:11.520 11:41:56 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:11.520 11:41:56 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:29:11.520 11:41:56 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:29:11.520 11:41:56 nvmf_identify_passthru -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:11.520 11:41:56 nvmf_identify_passthru -- nvmf/common.sh@121 -- # sync 00:29:11.520 11:41:56 nvmf_identify_passthru -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:11.520 11:41:56 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set +e 00:29:11.520 11:41:56 nvmf_identify_passthru -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:11.520 11:41:56 nvmf_identify_passthru -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:11.520 rmmod nvme_tcp 00:29:11.520 rmmod nvme_fabrics 00:29:11.520 rmmod nvme_keyring 00:29:11.520 11:41:56 nvmf_identify_passthru -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:11.520 11:41:57 nvmf_identify_passthru -- nvmf/common.sh@128 -- # set -e 00:29:11.520 11:41:57 nvmf_identify_passthru -- nvmf/common.sh@129 -- # return 0 00:29:11.521 11:41:57 nvmf_identify_passthru -- nvmf/common.sh@517 -- # '[' -n 107272 ']' 00:29:11.521 11:41:57 nvmf_identify_passthru -- nvmf/common.sh@518 -- # killprocess 107272 00:29:11.521 11:41:57 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # '[' -z 107272 ']' 00:29:11.521 11:41:57 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # kill -0 107272 00:29:11.521 11:41:57 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # uname 00:29:11.521 11:41:57 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:11.521 11:41:57 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 107272 00:29:11.521 killing process with pid 107272 00:29:11.521 11:41:57 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:11.521 11:41:57 nvmf_identify_passthru -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:11.521 11:41:57 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # echo 'killing process with pid 107272' 00:29:11.521 11:41:57 nvmf_identify_passthru -- common/autotest_common.sh@973 -- # kill 107272 00:29:11.521 11:41:57 nvmf_identify_passthru -- common/autotest_common.sh@978 -- # wait 107272 00:29:11.778 11:41:57 nvmf_identify_passthru -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:11.778 11:41:57 nvmf_identify_passthru -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:11.778 11:41:57 nvmf_identify_passthru -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:11.778 11:41:57 nvmf_identify_passthru -- nvmf/common.sh@297 -- # iptr 00:29:11.778 11:41:57 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-save 00:29:11.778 11:41:57 nvmf_identify_passthru -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:11.778 11:41:57 nvmf_identify_passthru -- nvmf/common.sh@791 -- # 
iptables-restore 00:29:11.778 11:41:57 nvmf_identify_passthru -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:11.778 11:41:57 nvmf_identify_passthru -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:29:11.778 11:41:57 nvmf_identify_passthru -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:29:11.778 11:41:57 nvmf_identify_passthru -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:29:11.779 11:41:57 nvmf_identify_passthru -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:29:11.779 11:41:57 nvmf_identify_passthru -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:29:11.779 11:41:57 nvmf_identify_passthru -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:29:11.779 11:41:57 nvmf_identify_passthru -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:29:11.779 11:41:57 nvmf_identify_passthru -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:29:11.779 11:41:57 nvmf_identify_passthru -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:29:11.779 11:41:57 nvmf_identify_passthru -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:29:11.779 11:41:57 nvmf_identify_passthru -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:29:11.779 11:41:57 nvmf_identify_passthru -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:29:11.779 11:41:57 nvmf_identify_passthru -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:29:11.779 11:41:57 nvmf_identify_passthru -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:29:12.037 11:41:57 nvmf_identify_passthru -- nvmf/common.sh@246 -- # remove_spdk_ns 00:29:12.037 11:41:57 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:12.037 11:41:57 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:29:12.037 11:41:57 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:12.037 11:41:57 nvmf_identify_passthru -- nvmf/common.sh@300 -- # return 0 00:29:12.037 00:29:12.037 real 0m2.707s 00:29:12.037 user 0m5.218s 00:29:12.037 sys 0m0.835s 00:29:12.037 11:41:57 nvmf_identify_passthru -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:12.037 ************************************ 00:29:12.037 END TEST nvmf_identify_passthru 00:29:12.037 11:41:57 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:12.037 ************************************ 00:29:12.037 11:41:57 -- spdk/autotest.sh@289 -- # run_test nvmf_dif /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:29:12.037 11:41:57 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:29:12.037 11:41:57 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:12.037 11:41:57 -- common/autotest_common.sh@10 -- # set +x 00:29:12.037 ************************************ 00:29:12.037 START TEST nvmf_dif 00:29:12.037 ************************************ 00:29:12.037 11:41:57 nvmf_dif -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:29:12.037 * Looking for test storage... 
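(Sketch, not part of the console output.) Stripped of the xtrace noise, the nvmf_identify_passthru test that just finished and was torn down above amounts to the following RPC sequence: start nvmf_tgt inside the namespace with --wait-for-rpc, enable the custom identify handler before framework init, export the local PCIe controller (0000:00:10.0) as a one-namespace TCP subsystem, and check that the serial (12340) and model (QEMU) reported through the fabric match what spdk_nvme_identify read over PCIe. A condensed recap with repository-relative paths instead of the /home/vagrant/spdk_repo paths in the trace:

# Start the target inside the namespace; --wait-for-rpc defers subsystem init
# so the passthru identify handler can be enabled first.
ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &
# (the harness waits here until the RPC socket /var/tmp/spdk.sock is listening)

./scripts/rpc.py nvmf_set_config --passthru-identify-ctrlr
./scripts/rpc.py framework_start_init
./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192

# Attach the local PCIe controller and export it as a single-namespace subsystem.
./scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420

# Identify through the fabric; serial and model must match the PCIe-side values.
./build/bin/spdk_nvme_identify \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
    | grep -E 'Serial Number:|Model Number:'

The --passthru-identify-ctrlr setting is what makes the target forward Identify data from the backing NVMe controller rather than synthesizing its own, which is why the fabric-side and PCIe-side reads are expected to agree.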
00:29:12.037 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:29:12.037 11:41:57 nvmf_dif -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:12.037 11:41:57 nvmf_dif -- common/autotest_common.sh@1693 -- # lcov --version 00:29:12.037 11:41:57 nvmf_dif -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:12.296 11:41:57 nvmf_dif -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:12.296 11:41:57 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:12.296 11:41:57 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:12.296 11:41:57 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:12.296 11:41:57 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:29:12.296 11:41:57 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:29:12.296 11:41:57 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:29:12.296 11:41:57 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:29:12.296 11:41:57 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:29:12.296 11:41:57 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:29:12.296 11:41:57 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:29:12.296 11:41:57 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:12.296 11:41:57 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:29:12.296 11:41:57 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:29:12.296 11:41:57 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:12.296 11:41:57 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:12.296 11:41:57 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:29:12.296 11:41:57 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:29:12.296 11:41:57 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:12.296 11:41:57 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:29:12.296 11:41:57 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:29:12.296 11:41:57 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:29:12.296 11:41:57 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:29:12.296 11:41:57 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:12.296 11:41:57 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:29:12.296 11:41:57 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:29:12.296 11:41:57 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:12.296 11:41:57 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:12.296 11:41:57 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:29:12.296 11:41:57 nvmf_dif -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:12.296 11:41:57 nvmf_dif -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:12.296 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:12.296 --rc genhtml_branch_coverage=1 00:29:12.296 --rc genhtml_function_coverage=1 00:29:12.296 --rc genhtml_legend=1 00:29:12.296 --rc geninfo_all_blocks=1 00:29:12.296 --rc geninfo_unexecuted_blocks=1 00:29:12.296 00:29:12.296 ' 00:29:12.296 11:41:57 nvmf_dif -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:12.296 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:12.296 --rc genhtml_branch_coverage=1 00:29:12.296 --rc genhtml_function_coverage=1 00:29:12.296 --rc genhtml_legend=1 00:29:12.296 --rc geninfo_all_blocks=1 00:29:12.296 --rc geninfo_unexecuted_blocks=1 00:29:12.296 00:29:12.296 ' 00:29:12.296 11:41:57 nvmf_dif -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:29:12.296 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:12.296 --rc genhtml_branch_coverage=1 00:29:12.296 --rc genhtml_function_coverage=1 00:29:12.296 --rc genhtml_legend=1 00:29:12.296 --rc geninfo_all_blocks=1 00:29:12.296 --rc geninfo_unexecuted_blocks=1 00:29:12.296 00:29:12.296 ' 00:29:12.296 11:41:57 nvmf_dif -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:12.296 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:12.296 --rc genhtml_branch_coverage=1 00:29:12.296 --rc genhtml_function_coverage=1 00:29:12.296 --rc genhtml_legend=1 00:29:12.296 --rc geninfo_all_blocks=1 00:29:12.296 --rc geninfo_unexecuted_blocks=1 00:29:12.296 00:29:12.296 ' 00:29:12.296 11:41:57 nvmf_dif -- target/dif.sh@13 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:29:12.296 11:41:57 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:29:12.296 11:41:57 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:12.296 11:41:57 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:12.296 11:41:57 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:12.296 11:41:57 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:12.296 11:41:57 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:12.296 11:41:57 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:12.296 11:41:57 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:12.296 11:41:57 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:12.296 11:41:57 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:12.296 11:41:57 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:12.296 11:41:57 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a 00:29:12.296 11:41:57 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=806d442c-08ba-4719-b76d-33169283459a 00:29:12.296 11:41:57 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:12.296 11:41:57 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:12.296 11:41:57 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:29:12.296 11:41:57 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:12.296 11:41:57 nvmf_dif -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:29:12.296 11:41:57 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:29:12.296 11:41:57 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:12.296 11:41:57 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:12.296 11:41:57 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:12.296 11:41:57 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:12.296 11:41:57 nvmf_dif -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:12.296 11:41:57 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:12.296 11:41:57 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:29:12.296 11:41:57 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:12.296 11:41:57 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:29:12.296 11:41:57 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:12.296 11:41:57 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:12.296 11:41:57 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:12.296 11:41:57 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:12.296 11:41:57 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:12.296 11:41:57 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:12.296 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:12.296 11:41:57 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:12.296 11:41:57 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:12.296 11:41:57 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:12.296 11:41:57 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:29:12.296 11:41:57 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:29:12.296 11:41:57 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:29:12.296 11:41:57 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:29:12.296 11:41:57 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:29:12.296 11:41:57 nvmf_dif -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:12.296 11:41:57 nvmf_dif -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:12.296 11:41:57 nvmf_dif -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:12.296 11:41:57 nvmf_dif -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:12.296 11:41:57 nvmf_dif -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:12.296 11:41:57 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:12.297 11:41:57 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:29:12.297 11:41:57 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:12.297 11:41:57 nvmf_dif -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:29:12.297 11:41:57 nvmf_dif -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:29:12.297 11:41:57 nvmf_dif -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:29:12.297 11:41:57 
nvmf_dif -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:29:12.297 11:41:57 nvmf_dif -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:29:12.297 11:41:57 nvmf_dif -- nvmf/common.sh@460 -- # nvmf_veth_init 00:29:12.297 11:41:57 nvmf_dif -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:12.297 11:41:57 nvmf_dif -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:29:12.297 11:41:57 nvmf_dif -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:29:12.297 11:41:57 nvmf_dif -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:29:12.297 11:41:57 nvmf_dif -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:12.297 11:41:57 nvmf_dif -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:29:12.297 11:41:57 nvmf_dif -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:29:12.297 11:41:57 nvmf_dif -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:29:12.297 11:41:57 nvmf_dif -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:29:12.297 11:41:57 nvmf_dif -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:29:12.297 11:41:57 nvmf_dif -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:29:12.297 11:41:57 nvmf_dif -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:12.297 11:41:57 nvmf_dif -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:29:12.297 11:41:57 nvmf_dif -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:29:12.297 11:41:57 nvmf_dif -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:29:12.297 11:41:57 nvmf_dif -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:29:12.297 11:41:57 nvmf_dif -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:29:12.297 Cannot find device "nvmf_init_br" 00:29:12.297 11:41:57 nvmf_dif -- nvmf/common.sh@162 -- # true 00:29:12.297 11:41:57 nvmf_dif -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:29:12.297 Cannot find device "nvmf_init_br2" 00:29:12.297 11:41:57 nvmf_dif -- nvmf/common.sh@163 -- # true 00:29:12.297 11:41:57 nvmf_dif -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:29:12.297 Cannot find device "nvmf_tgt_br" 00:29:12.297 11:41:57 nvmf_dif -- nvmf/common.sh@164 -- # true 00:29:12.297 11:41:57 nvmf_dif -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:29:12.297 Cannot find device "nvmf_tgt_br2" 00:29:12.297 11:41:57 nvmf_dif -- nvmf/common.sh@165 -- # true 00:29:12.297 11:41:57 nvmf_dif -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:29:12.297 Cannot find device "nvmf_init_br" 00:29:12.297 11:41:57 nvmf_dif -- nvmf/common.sh@166 -- # true 00:29:12.297 11:41:57 nvmf_dif -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:29:12.297 Cannot find device "nvmf_init_br2" 00:29:12.297 11:41:57 nvmf_dif -- nvmf/common.sh@167 -- # true 00:29:12.297 11:41:57 nvmf_dif -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:29:12.297 Cannot find device "nvmf_tgt_br" 00:29:12.297 11:41:57 nvmf_dif -- nvmf/common.sh@168 -- # true 00:29:12.297 11:41:57 nvmf_dif -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:29:12.297 Cannot find device "nvmf_tgt_br2" 00:29:12.297 11:41:57 nvmf_dif -- nvmf/common.sh@169 -- # true 00:29:12.297 11:41:57 nvmf_dif -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:29:12.297 Cannot find device "nvmf_br" 00:29:12.297 11:41:57 nvmf_dif -- nvmf/common.sh@170 -- # true 00:29:12.297 11:41:57 nvmf_dif -- nvmf/common.sh@171 -- # 
ip link delete nvmf_init_if 00:29:12.297 Cannot find device "nvmf_init_if" 00:29:12.297 11:41:57 nvmf_dif -- nvmf/common.sh@171 -- # true 00:29:12.297 11:41:57 nvmf_dif -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:29:12.297 Cannot find device "nvmf_init_if2" 00:29:12.297 11:41:57 nvmf_dif -- nvmf/common.sh@172 -- # true 00:29:12.297 11:41:57 nvmf_dif -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:29:12.297 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:29:12.297 11:41:57 nvmf_dif -- nvmf/common.sh@173 -- # true 00:29:12.297 11:41:57 nvmf_dif -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:29:12.297 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:29:12.297 11:41:57 nvmf_dif -- nvmf/common.sh@174 -- # true 00:29:12.297 11:41:57 nvmf_dif -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:29:12.297 11:41:57 nvmf_dif -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:29:12.297 11:41:57 nvmf_dif -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:29:12.297 11:41:57 nvmf_dif -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:29:12.297 11:41:57 nvmf_dif -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:29:12.297 11:41:57 nvmf_dif -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:29:12.555 11:41:57 nvmf_dif -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:29:12.555 11:41:57 nvmf_dif -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:29:12.555 11:41:57 nvmf_dif -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:29:12.555 11:41:57 nvmf_dif -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:29:12.555 11:41:57 nvmf_dif -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:29:12.555 11:41:57 nvmf_dif -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:29:12.555 11:41:57 nvmf_dif -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:29:12.555 11:41:57 nvmf_dif -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:29:12.555 11:41:57 nvmf_dif -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:29:12.555 11:41:57 nvmf_dif -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:29:12.555 11:41:57 nvmf_dif -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:29:12.555 11:41:57 nvmf_dif -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:29:12.555 11:41:57 nvmf_dif -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:29:12.555 11:41:57 nvmf_dif -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:29:12.555 11:41:57 nvmf_dif -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:29:12.555 11:41:57 nvmf_dif -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:29:12.555 11:41:57 nvmf_dif -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:29:12.555 11:41:57 nvmf_dif -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:29:12.555 11:41:58 nvmf_dif -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:29:12.555 11:41:58 nvmf_dif -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:29:12.555 11:41:58 nvmf_dif -- 
nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:29:12.555 11:41:58 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:29:12.555 11:41:58 nvmf_dif -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:29:12.555 11:41:58 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:29:12.555 11:41:58 nvmf_dif -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:29:12.555 11:41:58 nvmf_dif -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:29:12.555 11:41:58 nvmf_dif -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:29:12.555 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:29:12.555 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.068 ms 00:29:12.555 00:29:12.555 --- 10.0.0.3 ping statistics --- 00:29:12.555 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:12.555 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:29:12.555 11:41:58 nvmf_dif -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:29:12.555 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:29:12.555 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.051 ms 00:29:12.555 00:29:12.555 --- 10.0.0.4 ping statistics --- 00:29:12.555 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:12.555 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:29:12.556 11:41:58 nvmf_dif -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:29:12.556 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:12.556 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:29:12.556 00:29:12.556 --- 10.0.0.1 ping statistics --- 00:29:12.556 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:12.556 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:29:12.556 11:41:58 nvmf_dif -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:29:12.556 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:29:12.556 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.064 ms 00:29:12.556 00:29:12.556 --- 10.0.0.2 ping statistics --- 00:29:12.556 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:12.556 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:29:12.556 11:41:58 nvmf_dif -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:12.556 11:41:58 nvmf_dif -- nvmf/common.sh@461 -- # return 0 00:29:12.556 11:41:58 nvmf_dif -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:29:12.556 11:41:58 nvmf_dif -- nvmf/common.sh@479 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:29:12.814 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:29:13.072 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:29:13.072 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:29:13.072 11:41:58 nvmf_dif -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:13.072 11:41:58 nvmf_dif -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:13.072 11:41:58 nvmf_dif -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:13.072 11:41:58 nvmf_dif -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:13.073 11:41:58 nvmf_dif -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:13.073 11:41:58 nvmf_dif -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:13.073 11:41:58 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:29:13.073 11:41:58 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:29:13.073 11:41:58 nvmf_dif -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:13.073 11:41:58 nvmf_dif -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:13.073 11:41:58 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:29:13.073 11:41:58 nvmf_dif -- nvmf/common.sh@509 -- # nvmfpid=107656 00:29:13.073 11:41:58 nvmf_dif -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:29:13.073 11:41:58 nvmf_dif -- nvmf/common.sh@510 -- # waitforlisten 107656 00:29:13.073 11:41:58 nvmf_dif -- common/autotest_common.sh@835 -- # '[' -z 107656 ']' 00:29:13.073 11:41:58 nvmf_dif -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:13.073 11:41:58 nvmf_dif -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:13.073 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:13.073 11:41:58 nvmf_dif -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:13.073 11:41:58 nvmf_dif -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:13.073 11:41:58 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:29:13.073 [2024-11-20 11:41:58.550982] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 00:29:13.073 [2024-11-20 11:41:58.551106] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:13.331 [2024-11-20 11:41:58.707851] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:13.331 [2024-11-20 11:41:58.754335] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:29:13.331 [2024-11-20 11:41:58.754399] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:13.331 [2024-11-20 11:41:58.754418] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:13.331 [2024-11-20 11:41:58.754428] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:13.331 [2024-11-20 11:41:58.754437] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:13.331 [2024-11-20 11:41:58.754928] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:13.331 11:41:58 nvmf_dif -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:13.331 11:41:58 nvmf_dif -- common/autotest_common.sh@868 -- # return 0 00:29:13.331 11:41:58 nvmf_dif -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:13.331 11:41:58 nvmf_dif -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:13.331 11:41:58 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:29:13.331 11:41:58 nvmf_dif -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:13.331 11:41:58 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:29:13.331 11:41:58 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:29:13.331 11:41:58 nvmf_dif -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:13.331 11:41:58 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:29:13.331 [2024-11-20 11:41:58.899329] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:13.331 11:41:58 nvmf_dif -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:13.331 11:41:58 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:29:13.331 11:41:58 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:29:13.331 11:41:58 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:13.331 11:41:58 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:29:13.331 ************************************ 00:29:13.331 START TEST fio_dif_1_default 00:29:13.331 ************************************ 00:29:13.331 11:41:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1129 -- # fio_dif_1 00:29:13.331 11:41:58 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:29:13.331 11:41:58 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:29:13.331 11:41:58 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:29:13.331 11:41:58 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:29:13.331 11:41:58 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:29:13.331 11:41:58 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:29:13.331 11:41:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:13.331 11:41:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:29:13.590 bdev_null0 00:29:13.590 11:41:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:13.590 11:41:58 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:29:13.590 11:41:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:13.590 11:41:58 
nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:29:13.590 11:41:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:13.590 11:41:58 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:29:13.590 11:41:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:13.590 11:41:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:29:13.590 11:41:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:13.590 11:41:58 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:29:13.590 11:41:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:13.590 11:41:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:29:13.590 [2024-11-20 11:41:58.943468] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:29:13.590 11:41:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:13.590 11:41:58 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:29:13.590 11:41:58 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:29:13.590 11:41:58 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:13.590 11:41:58 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:29:13.590 11:41:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:13.590 11:41:58 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # config=() 00:29:13.590 11:41:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:29:13.590 11:41:58 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # local subsystem config 00:29:13.590 11:41:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:13.590 11:41:58 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:29:13.590 11:41:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local sanitizers 00:29:13.590 11:41:58 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:13.590 11:41:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:29:13.590 11:41:58 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:29:13.590 11:41:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # shift 00:29:13.590 11:41:58 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:13.590 { 00:29:13.590 "params": { 00:29:13.590 "name": "Nvme$subsystem", 00:29:13.590 "trtype": "$TEST_TRANSPORT", 00:29:13.590 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:13.590 "adrfam": "ipv4", 00:29:13.590 "trsvcid": "$NVMF_PORT", 00:29:13.590 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:13.590 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:13.590 "hdgst": ${hdgst:-false}, 00:29:13.590 "ddgst": ${ddgst:-false} 00:29:13.590 }, 00:29:13.590 "method": "bdev_nvme_attach_controller" 00:29:13.590 } 00:29:13.590 EOF 00:29:13.590 )") 
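For reference, the create_subsystems helper traced above reduces to four RPC calls against the nvmf target that was started earlier on /var/tmp/spdk.sock. A minimal stand-alone sketch follows, assuming rpc_cmd is simply the harness wrapper around scripts/rpc.py; the flags and addresses are copied from the trace, and the $rpc shorthand is introduced here only for readability.
    # Sketch only: $rpc is shorthand for this example, not a variable used by the test.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # 64 MB null bdev with 512-byte blocks, 16 bytes of metadata, DIF type 1
    $rpc bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
    # NVMe-oF subsystem cnode0, namespace backed by the null bdev, TCP listener on 10.0.0.3:4420
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420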
00:29:13.590 11:41:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # local asan_lib= 00:29:13.590 11:41:58 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:29:13.590 11:41:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:29:13.590 11:41:58 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # cat 00:29:13.590 11:41:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libasan 00:29:13.590 11:41:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:29:13.590 11:41:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:29:13.590 11:41:58 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:29:13.590 11:41:58 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:29:13.590 11:41:58 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # jq . 00:29:13.590 11:41:58 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@585 -- # IFS=, 00:29:13.590 11:41:58 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:29:13.590 "params": { 00:29:13.590 "name": "Nvme0", 00:29:13.590 "trtype": "tcp", 00:29:13.590 "traddr": "10.0.0.3", 00:29:13.590 "adrfam": "ipv4", 00:29:13.590 "trsvcid": "4420", 00:29:13.590 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:13.590 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:13.590 "hdgst": false, 00:29:13.590 "ddgst": false 00:29:13.590 }, 00:29:13.590 "method": "bdev_nvme_attach_controller" 00:29:13.590 }' 00:29:13.590 11:41:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:29:13.590 11:41:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:29:13.590 11:41:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:29:13.590 11:41:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:29:13.590 11:41:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:29:13.590 11:41:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:29:13.590 11:41:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:29:13.590 11:41:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:29:13.590 11:41:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:29:13.590 11:41:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:13.848 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:29:13.848 fio-3.35 00:29:13.848 Starting 1 thread 00:29:26.049 00:29:26.049 filename0: (groupid=0, jobs=1): err= 0: pid=107727: Wed Nov 20 11:42:09 2024 00:29:26.049 read: IOPS=335, BW=1340KiB/s (1372kB/s)(13.1MiB/10029msec) 00:29:26.049 slat (nsec): min=7283, max=58674, avg=9839.24, stdev=4980.05 00:29:26.049 clat (usec): min=455, max=41729, avg=11907.33, stdev=18211.13 00:29:26.049 lat (usec): min=463, max=41740, avg=11917.17, stdev=18211.02 00:29:26.049 clat percentiles (usec): 00:29:26.049 | 1.00th=[ 465], 5.00th=[ 478], 10.00th=[ 486], 20.00th=[ 502], 00:29:26.049 | 30.00th=[ 515], 40.00th=[ 553], 50.00th=[ 586], 
60.00th=[ 611], 00:29:26.049 | 70.00th=[ 742], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:29:26.049 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:29:26.049 | 99.99th=[41681] 00:29:26.049 bw ( KiB/s): min= 736, max= 5152, per=100.00%, avg=1342.40, stdev=996.98, samples=20 00:29:26.049 iops : min= 184, max= 1288, avg=335.60, stdev=249.25, samples=20 00:29:26.049 lat (usec) : 500=20.06%, 750=50.12%, 1000=1.34% 00:29:26.049 lat (msec) : 2=0.39%, 10=0.12%, 50=27.98% 00:29:26.049 cpu : usr=91.97%, sys=7.51%, ctx=34, majf=0, minf=9 00:29:26.049 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:26.049 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:26.049 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:26.049 issued rwts: total=3360,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:26.049 latency : target=0, window=0, percentile=100.00%, depth=4 00:29:26.049 00:29:26.049 Run status group 0 (all jobs): 00:29:26.049 READ: bw=1340KiB/s (1372kB/s), 1340KiB/s-1340KiB/s (1372kB/s-1372kB/s), io=13.1MiB (13.8MB), run=10029-10029msec 00:29:26.049 11:42:09 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:29:26.049 11:42:09 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:29:26.049 11:42:09 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:29:26.049 11:42:09 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:29:26.049 11:42:09 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:29:26.049 11:42:09 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:29:26.049 11:42:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:26.049 11:42:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:29:26.049 11:42:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:26.049 11:42:09 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:29:26.049 11:42:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:26.049 11:42:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:29:26.049 ************************************ 00:29:26.049 END TEST fio_dif_1_default 00:29:26.049 ************************************ 00:29:26.049 11:42:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:26.049 00:29:26.049 real 0m11.005s 00:29:26.049 user 0m9.873s 00:29:26.049 sys 0m0.999s 00:29:26.049 11:42:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:26.049 11:42:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:29:26.049 11:42:09 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:29:26.049 11:42:09 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:29:26.049 11:42:09 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:26.049 11:42:09 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:29:26.049 ************************************ 00:29:26.049 START TEST fio_dif_1_multi_subsystems 00:29:26.049 ************************************ 00:29:26.049 11:42:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1129 -- # fio_dif_1_multi_subsystems 00:29:26.049 11:42:09 
nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:29:26.049 11:42:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:29:26.049 11:42:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:29:26.049 11:42:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:29:26.049 11:42:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:29:26.049 11:42:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:29:26.049 11:42:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:29:26.049 11:42:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:26.049 11:42:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:29:26.049 bdev_null0 00:29:26.049 11:42:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:26.049 11:42:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:29:26.049 11:42:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:26.049 11:42:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:29:26.049 11:42:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:26.049 11:42:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:29:26.049 11:42:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:26.049 11:42:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:29:26.049 11:42:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:26.049 11:42:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:29:26.049 11:42:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:26.049 11:42:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:29:26.049 [2024-11-20 11:42:10.000392] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:29:26.049 11:42:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:26.049 11:42:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:29:26.049 11:42:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:29:26.049 11:42:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:29:26.049 11:42:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:29:26.049 11:42:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:26.049 11:42:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:29:26.049 bdev_null1 00:29:26.049 11:42:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:26.049 11:42:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 
-- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:29:26.049 11:42:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:26.049 11:42:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:29:26.049 11:42:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:26.050 11:42:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:29:26.050 11:42:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:26.050 11:42:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:29:26.050 11:42:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:26.050 11:42:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:29:26.050 11:42:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:26.050 11:42:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:29:26.050 11:42:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:26.050 11:42:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:29:26.050 11:42:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:29:26.050 11:42:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:29:26.050 11:42:10 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # config=() 00:29:26.050 11:42:10 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # local subsystem config 00:29:26.050 11:42:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:26.050 11:42:10 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:26.050 11:42:10 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:26.050 { 00:29:26.050 "params": { 00:29:26.050 "name": "Nvme$subsystem", 00:29:26.050 "trtype": "$TEST_TRANSPORT", 00:29:26.050 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:26.050 "adrfam": "ipv4", 00:29:26.050 "trsvcid": "$NVMF_PORT", 00:29:26.050 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:26.050 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:26.050 "hdgst": ${hdgst:-false}, 00:29:26.050 "ddgst": ${ddgst:-false} 00:29:26.050 }, 00:29:26.050 "method": "bdev_nvme_attach_controller" 00:29:26.050 } 00:29:26.050 EOF 00:29:26.050 )") 00:29:26.050 11:42:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:26.050 11:42:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:29:26.050 11:42:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:29:26.050 11:42:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:26.050 11:42:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local sanitizers 00:29:26.050 11:42:10 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:29:26.050 11:42:10 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:29:26.050 11:42:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # shift 00:29:26.050 11:42:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # local asan_lib= 00:29:26.050 11:42:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:29:26.050 11:42:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:29:26.050 11:42:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:29:26.050 11:42:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libasan 00:29:26.050 11:42:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:29:26.050 11:42:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:29:26.050 11:42:10 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:26.050 11:42:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:29:26.050 11:42:10 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:26.050 { 00:29:26.050 "params": { 00:29:26.050 "name": "Nvme$subsystem", 00:29:26.050 "trtype": "$TEST_TRANSPORT", 00:29:26.050 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:26.050 "adrfam": "ipv4", 00:29:26.050 "trsvcid": "$NVMF_PORT", 00:29:26.050 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:26.050 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:26.050 "hdgst": ${hdgst:-false}, 00:29:26.050 "ddgst": ${ddgst:-false} 00:29:26.050 }, 00:29:26.050 "method": "bdev_nvme_attach_controller" 00:29:26.050 } 00:29:26.050 EOF 00:29:26.050 )") 00:29:26.050 11:42:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:29:26.050 11:42:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:29:26.050 11:42:10 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:29:26.050 11:42:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:29:26.050 11:42:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:29:26.050 11:42:10 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # jq . 
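The JSON block printed next is the bdev configuration fio receives on /dev/fd/62, while the job file built by gen_fio_conf comes in on the second descriptor (/dev/fd/61). A rough stand-alone equivalent of the fio_bdev call is sketched below; only the LD_PRELOAD and --ioengine/--spdk_json_conf mechanics are taken from the trace, while the job-file contents, the Nvme0n1/Nvme1n1 bdev names, and the config.json path are illustrative assumptions.
    # config.json is assumed to be a full SPDK JSON config whose bdev subsystem
    # carries the bdev_nvme_attach_controller entries shown in the trace below.
    # thread=1 follows the SPDK fio plugin examples.
    printf '%s\n' '[global]' 'ioengine=spdk_bdev' 'thread=1' 'rw=randread' 'bs=4k' 'iodepth=4' \
        '[filename0]' 'filename=Nvme0n1' '[filename1]' 'filename=Nvme1n1' > /tmp/dif.fio
    LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
        /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/tmp/config.json /tmp/dif.fio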
00:29:26.050 11:42:10 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@585 -- # IFS=, 00:29:26.050 11:42:10 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:29:26.050 "params": { 00:29:26.050 "name": "Nvme0", 00:29:26.050 "trtype": "tcp", 00:29:26.050 "traddr": "10.0.0.3", 00:29:26.050 "adrfam": "ipv4", 00:29:26.050 "trsvcid": "4420", 00:29:26.050 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:26.050 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:26.050 "hdgst": false, 00:29:26.050 "ddgst": false 00:29:26.050 }, 00:29:26.050 "method": "bdev_nvme_attach_controller" 00:29:26.050 },{ 00:29:26.050 "params": { 00:29:26.050 "name": "Nvme1", 00:29:26.050 "trtype": "tcp", 00:29:26.050 "traddr": "10.0.0.3", 00:29:26.050 "adrfam": "ipv4", 00:29:26.050 "trsvcid": "4420", 00:29:26.050 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:26.050 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:26.050 "hdgst": false, 00:29:26.050 "ddgst": false 00:29:26.050 }, 00:29:26.050 "method": "bdev_nvme_attach_controller" 00:29:26.050 }' 00:29:26.050 11:42:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:29:26.050 11:42:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:29:26.050 11:42:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:29:26.050 11:42:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:29:26.050 11:42:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:29:26.050 11:42:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:29:26.050 11:42:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:29:26.050 11:42:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:29:26.050 11:42:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:29:26.050 11:42:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:26.050 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:29:26.050 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:29:26.050 fio-3.35 00:29:26.050 Starting 2 threads 00:29:36.020 00:29:36.020 filename0: (groupid=0, jobs=1): err= 0: pid=107883: Wed Nov 20 11:42:20 2024 00:29:36.020 read: IOPS=132, BW=529KiB/s (542kB/s)(5296KiB/10004msec) 00:29:36.020 slat (nsec): min=7785, max=43869, avg=11065.71, stdev=6251.08 00:29:36.020 clat (usec): min=443, max=42869, avg=30184.90, stdev=17976.31 00:29:36.020 lat (usec): min=451, max=42894, avg=30195.96, stdev=17976.15 00:29:36.020 clat percentiles (usec): 00:29:36.020 | 1.00th=[ 465], 5.00th=[ 482], 10.00th=[ 498], 20.00th=[ 545], 00:29:36.020 | 30.00th=[40633], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:29:36.020 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41681], 95.00th=[41681], 00:29:36.020 | 99.00th=[41681], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:29:36.020 | 99.99th=[42730] 00:29:36.020 bw ( KiB/s): min= 384, max= 704, per=48.41%, avg=523.79, stdev=77.07, samples=19 00:29:36.020 iops : 
min= 96, max= 176, avg=130.95, stdev=19.27, samples=19 00:29:36.020 lat (usec) : 500=11.48%, 750=12.39%, 1000=2.72% 00:29:36.020 lat (msec) : 2=0.30%, 50=73.11% 00:29:36.020 cpu : usr=95.28%, sys=4.27%, ctx=10, majf=0, minf=9 00:29:36.020 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:36.020 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:36.020 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:36.020 issued rwts: total=1324,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:36.020 latency : target=0, window=0, percentile=100.00%, depth=4 00:29:36.020 filename1: (groupid=0, jobs=1): err= 0: pid=107884: Wed Nov 20 11:42:20 2024 00:29:36.020 read: IOPS=137, BW=551KiB/s (565kB/s)(5520KiB/10011msec) 00:29:36.020 slat (nsec): min=7835, max=47212, avg=11931.67, stdev=7275.66 00:29:36.020 clat (usec): min=455, max=41957, avg=28974.92, stdev=18545.82 00:29:36.020 lat (usec): min=466, max=41992, avg=28986.85, stdev=18545.68 00:29:36.020 clat percentiles (usec): 00:29:36.020 | 1.00th=[ 474], 5.00th=[ 490], 10.00th=[ 506], 20.00th=[ 529], 00:29:36.020 | 30.00th=[40633], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:29:36.020 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:29:36.021 | 99.00th=[41681], 99.50th=[41681], 99.90th=[42206], 99.95th=[42206], 00:29:36.021 | 99.99th=[42206] 00:29:36.021 bw ( KiB/s): min= 416, max= 928, per=50.91%, avg=550.40, stdev=113.07, samples=20 00:29:36.021 iops : min= 104, max= 232, avg=137.60, stdev=28.27, samples=20 00:29:36.021 lat (usec) : 500=8.19%, 750=18.48%, 1000=2.90% 00:29:36.021 lat (msec) : 2=0.29%, 50=70.14% 00:29:36.021 cpu : usr=95.52%, sys=4.01%, ctx=17, majf=0, minf=0 00:29:36.021 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:36.021 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:36.021 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:36.021 issued rwts: total=1380,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:36.021 latency : target=0, window=0, percentile=100.00%, depth=4 00:29:36.021 00:29:36.021 Run status group 0 (all jobs): 00:29:36.021 READ: bw=1080KiB/s (1106kB/s), 529KiB/s-551KiB/s (542kB/s-565kB/s), io=10.6MiB (11.1MB), run=10004-10011msec 00:29:36.021 11:42:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:29:36.021 11:42:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:29:36.021 11:42:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:29:36.021 11:42:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:29:36.021 11:42:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:29:36.021 11:42:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:29:36.021 11:42:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:36.021 11:42:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:29:36.021 11:42:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:36.021 11:42:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:29:36.021 11:42:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:36.021 11:42:21 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:29:36.021 11:42:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:36.021 11:42:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:29:36.021 11:42:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:29:36.021 11:42:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:29:36.021 11:42:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:36.021 11:42:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:36.021 11:42:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:29:36.021 11:42:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:36.021 11:42:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:29:36.021 11:42:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:36.021 11:42:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:29:36.021 ************************************ 00:29:36.021 END TEST fio_dif_1_multi_subsystems 00:29:36.021 ************************************ 00:29:36.021 11:42:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:36.021 00:29:36.021 real 0m11.158s 00:29:36.021 user 0m19.896s 00:29:36.021 sys 0m1.084s 00:29:36.021 11:42:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:36.021 11:42:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:29:36.021 11:42:21 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:29:36.021 11:42:21 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:29:36.021 11:42:21 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:36.021 11:42:21 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:29:36.021 ************************************ 00:29:36.021 START TEST fio_dif_rand_params 00:29:36.021 ************************************ 00:29:36.021 11:42:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1129 -- # fio_dif_rand_params 00:29:36.021 11:42:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:29:36.021 11:42:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:29:36.021 11:42:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:29:36.021 11:42:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:29:36.021 11:42:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:29:36.021 11:42:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:29:36.021 11:42:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:29:36.021 11:42:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:29:36.021 11:42:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:29:36.021 11:42:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:29:36.021 11:42:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:29:36.021 11:42:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local 
sub_id=0 00:29:36.021 11:42:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:29:36.021 11:42:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:36.021 11:42:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:36.021 bdev_null0 00:29:36.021 11:42:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:36.021 11:42:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:29:36.021 11:42:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:36.021 11:42:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:36.021 11:42:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:36.021 11:42:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:29:36.021 11:42:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:36.021 11:42:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:36.021 11:42:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:36.021 11:42:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:29:36.021 11:42:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:36.021 11:42:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:36.021 [2024-11-20 11:42:21.211601] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:29:36.021 11:42:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:36.021 11:42:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:29:36.021 11:42:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:29:36.021 11:42:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:29:36.021 11:42:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:29:36.021 11:42:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:29:36.021 11:42:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:36.021 11:42:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:36.021 { 00:29:36.021 "params": { 00:29:36.021 "name": "Nvme$subsystem", 00:29:36.021 "trtype": "$TEST_TRANSPORT", 00:29:36.021 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:36.021 "adrfam": "ipv4", 00:29:36.021 "trsvcid": "$NVMF_PORT", 00:29:36.021 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:36.021 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:36.021 "hdgst": ${hdgst:-false}, 00:29:36.021 "ddgst": ${ddgst:-false} 00:29:36.021 }, 00:29:36.021 "method": "bdev_nvme_attach_controller" 00:29:36.021 } 00:29:36.021 EOF 00:29:36.021 )") 00:29:36.021 11:42:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:36.022 11:42:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 
--ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:36.022 11:42:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:29:36.022 11:42:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:29:36.022 11:42:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:36.022 11:42:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:29:36.022 11:42:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:29:36.022 11:42:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:29:36.022 11:42:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:29:36.022 11:42:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:29:36.022 11:42:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:29:36.022 11:42:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:29:36.022 11:42:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:29:36.022 11:42:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:29:36.022 11:42:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:29:36.022 11:42:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:29:36.022 11:42:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:29:36.022 11:42:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:29:36.022 11:42:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:29:36.022 11:42:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:29:36.022 11:42:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:29:36.022 "params": { 00:29:36.022 "name": "Nvme0", 00:29:36.022 "trtype": "tcp", 00:29:36.022 "traddr": "10.0.0.3", 00:29:36.022 "adrfam": "ipv4", 00:29:36.022 "trsvcid": "4420", 00:29:36.022 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:36.022 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:36.022 "hdgst": false, 00:29:36.022 "ddgst": false 00:29:36.022 }, 00:29:36.022 "method": "bdev_nvme_attach_controller" 00:29:36.022 }' 00:29:36.022 11:42:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:29:36.022 11:42:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:29:36.022 11:42:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:29:36.022 11:42:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:29:36.022 11:42:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:29:36.022 11:42:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:29:36.022 11:42:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:29:36.022 11:42:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:29:36.022 11:42:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:29:36.022 11:42:21 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:36.022 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:29:36.022 ... 00:29:36.022 fio-3.35 00:29:36.022 Starting 3 threads 00:29:42.585 00:29:42.585 filename0: (groupid=0, jobs=1): err= 0: pid=108038: Wed Nov 20 11:42:26 2024 00:29:42.585 read: IOPS=236, BW=29.6MiB/s (31.0MB/s)(148MiB/5007msec) 00:29:42.585 slat (nsec): min=7844, max=33302, avg=12137.99, stdev=3215.65 00:29:42.585 clat (usec): min=4501, max=54948, avg=12665.83, stdev=6273.68 00:29:42.585 lat (usec): min=4512, max=54959, avg=12677.97, stdev=6273.78 00:29:42.585 clat percentiles (usec): 00:29:42.585 | 1.00th=[ 6521], 5.00th=[ 7635], 10.00th=[ 8094], 20.00th=[10814], 00:29:42.585 | 30.00th=[11600], 40.00th=[12125], 50.00th=[12387], 60.00th=[12649], 00:29:42.585 | 70.00th=[12911], 80.00th=[13173], 90.00th=[13698], 95.00th=[14222], 00:29:42.585 | 99.00th=[52167], 99.50th=[54789], 99.90th=[54789], 99.95th=[54789], 00:29:42.585 | 99.99th=[54789] 00:29:42.585 bw ( KiB/s): min=24576, max=35328, per=33.02%, avg=30233.60, stdev=2699.69, samples=10 00:29:42.585 iops : min= 192, max= 276, avg=236.20, stdev=21.09, samples=10 00:29:42.585 lat (msec) : 10=15.79%, 20=81.93%, 50=0.59%, 100=1.69% 00:29:42.585 cpu : usr=93.53%, sys=5.11%, ctx=8, majf=0, minf=0 00:29:42.585 IO depths : 1=7.2%, 2=92.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:42.585 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:42.585 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:42.585 issued rwts: total=1184,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:42.585 latency : target=0, window=0, percentile=100.00%, depth=3 00:29:42.585 filename0: (groupid=0, jobs=1): err= 0: pid=108039: Wed Nov 20 11:42:26 2024 00:29:42.585 read: IOPS=255, BW=31.9MiB/s (33.4MB/s)(160MiB/5003msec) 00:29:42.585 slat (nsec): min=5026, max=46732, avg=13422.03, stdev=3579.29 00:29:42.585 clat (usec): min=4658, max=55524, avg=11742.29, stdev=7065.61 00:29:42.585 lat (usec): min=4663, max=55537, avg=11755.72, stdev=7065.46 00:29:42.585 clat percentiles (usec): 00:29:42.585 | 1.00th=[ 7046], 5.00th=[ 8094], 10.00th=[ 9110], 20.00th=[ 9765], 00:29:42.585 | 30.00th=[10159], 40.00th=[10421], 50.00th=[10683], 60.00th=[10945], 00:29:42.585 | 70.00th=[11207], 80.00th=[11469], 90.00th=[11863], 95.00th=[12256], 00:29:42.585 | 99.00th=[51643], 99.50th=[51643], 99.90th=[53740], 99.95th=[55313], 00:29:42.585 | 99.99th=[55313] 00:29:42.585 bw ( KiB/s): min=25344, max=36096, per=35.29%, avg=32312.89, stdev=4065.44, samples=9 00:29:42.585 iops : min= 198, max= 282, avg=252.44, stdev=31.76, samples=9 00:29:42.585 lat (msec) : 10=24.06%, 20=72.88%, 50=0.55%, 100=2.51% 00:29:42.585 cpu : usr=92.46%, sys=6.02%, ctx=8, majf=0, minf=9 00:29:42.585 IO depths : 1=1.4%, 2=98.6%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:42.585 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:42.585 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:42.585 issued rwts: total=1276,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:42.585 latency : target=0, window=0, percentile=100.00%, depth=3 00:29:42.585 filename0: (groupid=0, jobs=1): err= 0: pid=108040: Wed Nov 20 11:42:26 2024 00:29:42.585 read: IOPS=224, BW=28.0MiB/s (29.4MB/s)(140MiB/5007msec) 00:29:42.585 slat (nsec): min=7850, max=36006, 
avg=10941.89, stdev=4159.24 00:29:42.585 clat (usec): min=4257, max=46912, avg=13359.82, stdev=3421.96 00:29:42.585 lat (usec): min=4265, max=46920, avg=13370.76, stdev=3421.80 00:29:42.585 clat percentiles (usec): 00:29:42.585 | 1.00th=[ 4293], 5.00th=[ 8455], 10.00th=[ 8979], 20.00th=[ 9896], 00:29:42.585 | 30.00th=[13829], 40.00th=[14222], 50.00th=[14484], 60.00th=[14746], 00:29:42.585 | 70.00th=[15008], 80.00th=[15270], 90.00th=[15664], 95.00th=[16057], 00:29:42.585 | 99.00th=[16581], 99.50th=[16909], 99.90th=[45351], 99.95th=[46924], 00:29:42.585 | 99.99th=[46924] 00:29:42.585 bw ( KiB/s): min=25344, max=34560, per=31.28%, avg=28646.40, stdev=3319.13, samples=10 00:29:42.585 iops : min= 198, max= 270, avg=223.80, stdev=25.93, samples=10 00:29:42.585 lat (msec) : 10=20.23%, 20=79.50%, 50=0.27% 00:29:42.585 cpu : usr=92.45%, sys=6.21%, ctx=10, majf=0, minf=0 00:29:42.585 IO depths : 1=30.7%, 2=69.3%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:42.585 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:42.585 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:42.585 issued rwts: total=1122,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:42.585 latency : target=0, window=0, percentile=100.00%, depth=3 00:29:42.585 00:29:42.585 Run status group 0 (all jobs): 00:29:42.585 READ: bw=89.4MiB/s (93.8MB/s), 28.0MiB/s-31.9MiB/s (29.4MB/s-33.4MB/s), io=448MiB (469MB), run=5003-5007msec 00:29:42.585 11:42:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:29:42.585 11:42:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:29:42.585 11:42:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:29:42.585 11:42:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:29:42.585 11:42:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:29:42.585 11:42:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:29:42.585 11:42:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:42.585 11:42:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:42.585 11:42:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:42.585 11:42:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:29:42.585 11:42:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:42.585 11:42:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:42.585 11:42:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:42.585 11:42:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:29:42.585 11:42:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:29:42.585 11:42:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:29:42.585 11:42:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:29:42.585 11:42:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:29:42.585 11:42:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:29:42.585 11:42:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:29:42.585 11:42:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:29:42.585 11:42:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub 
in "$@" 00:29:42.585 11:42:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:29:42.585 11:42:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:29:42.585 11:42:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:29:42.585 11:42:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:42.585 11:42:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:42.585 bdev_null0 00:29:42.585 11:42:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:42.585 11:42:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:29:42.585 11:42:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:42.585 11:42:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:42.585 11:42:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:42.585 11:42:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:29:42.585 11:42:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:42.585 11:42:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:42.585 11:42:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:42.585 11:42:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:29:42.585 11:42:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:42.585 11:42:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:42.585 [2024-11-20 11:42:27.175336] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:29:42.585 11:42:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:42.585 11:42:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:29:42.585 11:42:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:29:42.586 11:42:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:29:42.586 11:42:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:29:42.586 11:42:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:42.586 11:42:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:42.586 bdev_null1 00:29:42.586 11:42:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:42.586 11:42:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:29:42.586 11:42:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:42.586 11:42:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:42.586 11:42:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:42.586 11:42:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 
bdev_null1 00:29:42.586 11:42:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:42.586 11:42:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:42.586 11:42:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:42.586 11:42:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:29:42.586 11:42:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:42.586 11:42:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:42.586 11:42:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:42.586 11:42:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:29:42.586 11:42:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:29:42.586 11:42:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:29:42.586 11:42:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:29:42.586 11:42:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:42.586 11:42:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:42.586 bdev_null2 00:29:42.586 11:42:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:42.586 11:42:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:29:42.586 11:42:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:42.586 11:42:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:42.586 11:42:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:42.586 11:42:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:29:42.586 11:42:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:42.586 11:42:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:42.586 11:42:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:42.586 11:42:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:29:42.586 11:42:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:42.586 11:42:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:42.586 11:42:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:42.586 11:42:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:29:42.586 11:42:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:29:42.586 11:42:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:29:42.586 11:42:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:29:42.586 11:42:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:29:42.586 11:42:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:29:42.586 11:42:27 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:42.586 11:42:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:29:42.586 11:42:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:29:42.586 11:42:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:42.586 11:42:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:42.586 { 00:29:42.586 "params": { 00:29:42.586 "name": "Nvme$subsystem", 00:29:42.586 "trtype": "$TEST_TRANSPORT", 00:29:42.586 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:42.586 "adrfam": "ipv4", 00:29:42.586 "trsvcid": "$NVMF_PORT", 00:29:42.586 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:42.586 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:42.586 "hdgst": ${hdgst:-false}, 00:29:42.586 "ddgst": ${ddgst:-false} 00:29:42.586 }, 00:29:42.586 "method": "bdev_nvme_attach_controller" 00:29:42.586 } 00:29:42.586 EOF 00:29:42.586 )") 00:29:42.586 11:42:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:42.586 11:42:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:29:42.586 11:42:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:42.586 11:42:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:29:42.586 11:42:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:29:42.586 11:42:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:29:42.586 11:42:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:29:42.586 11:42:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:29:42.586 11:42:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:29:42.586 11:42:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:29:42.586 11:42:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:29:42.586 11:42:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:29:42.586 11:42:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:42.586 11:42:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:29:42.586 11:42:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:42.586 { 00:29:42.586 "params": { 00:29:42.586 "name": "Nvme$subsystem", 00:29:42.586 "trtype": "$TEST_TRANSPORT", 00:29:42.586 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:42.586 "adrfam": "ipv4", 00:29:42.586 "trsvcid": "$NVMF_PORT", 00:29:42.586 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:42.586 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:42.586 "hdgst": ${hdgst:-false}, 00:29:42.586 "ddgst": ${ddgst:-false} 00:29:42.586 }, 00:29:42.586 "method": "bdev_nvme_attach_controller" 00:29:42.586 } 00:29:42.586 EOF 00:29:42.586 )") 00:29:42.586 11:42:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:29:42.586 11:42:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:29:42.586 11:42:27 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:29:42.586 11:42:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:29:42.586 11:42:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:29:42.586 11:42:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:29:42.586 11:42:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:29:42.586 11:42:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:29:42.586 11:42:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:42.586 11:42:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:42.586 { 00:29:42.586 "params": { 00:29:42.586 "name": "Nvme$subsystem", 00:29:42.586 "trtype": "$TEST_TRANSPORT", 00:29:42.586 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:42.586 "adrfam": "ipv4", 00:29:42.586 "trsvcid": "$NVMF_PORT", 00:29:42.586 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:42.586 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:42.586 "hdgst": ${hdgst:-false}, 00:29:42.586 "ddgst": ${ddgst:-false} 00:29:42.586 }, 00:29:42.586 "method": "bdev_nvme_attach_controller" 00:29:42.586 } 00:29:42.586 EOF 00:29:42.586 )") 00:29:42.586 11:42:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:29:42.586 11:42:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:29:42.586 11:42:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:29:42.586 11:42:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:29:42.586 "params": { 00:29:42.586 "name": "Nvme0", 00:29:42.586 "trtype": "tcp", 00:29:42.586 "traddr": "10.0.0.3", 00:29:42.586 "adrfam": "ipv4", 00:29:42.586 "trsvcid": "4420", 00:29:42.586 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:42.586 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:42.586 "hdgst": false, 00:29:42.586 "ddgst": false 00:29:42.586 }, 00:29:42.586 "method": "bdev_nvme_attach_controller" 00:29:42.586 },{ 00:29:42.586 "params": { 00:29:42.586 "name": "Nvme1", 00:29:42.586 "trtype": "tcp", 00:29:42.586 "traddr": "10.0.0.3", 00:29:42.586 "adrfam": "ipv4", 00:29:42.586 "trsvcid": "4420", 00:29:42.586 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:42.586 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:42.586 "hdgst": false, 00:29:42.586 "ddgst": false 00:29:42.586 }, 00:29:42.586 "method": "bdev_nvme_attach_controller" 00:29:42.586 },{ 00:29:42.586 "params": { 00:29:42.586 "name": "Nvme2", 00:29:42.586 "trtype": "tcp", 00:29:42.586 "traddr": "10.0.0.3", 00:29:42.586 "adrfam": "ipv4", 00:29:42.586 "trsvcid": "4420", 00:29:42.586 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:29:42.586 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:29:42.586 "hdgst": false, 00:29:42.586 "ddgst": false 00:29:42.586 }, 00:29:42.586 "method": "bdev_nvme_attach_controller" 00:29:42.586 }' 00:29:42.586 11:42:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:29:42.587 11:42:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:29:42.587 11:42:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:29:42.587 11:42:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:29:42.587 11:42:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:29:42.587 11:42:27 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:29:42.587 11:42:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:29:42.587 11:42:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:29:42.587 11:42:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:29:42.587 11:42:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:42.587 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:29:42.587 ... 00:29:42.587 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:29:42.587 ... 00:29:42.587 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:29:42.587 ... 00:29:42.587 fio-3.35 00:29:42.587 Starting 24 threads 00:29:54.811 00:29:54.811 filename0: (groupid=0, jobs=1): err= 0: pid=108131: Wed Nov 20 11:42:38 2024 00:29:54.811 read: IOPS=171, BW=685KiB/s (702kB/s)(6872KiB/10027msec) 00:29:54.811 slat (usec): min=5, max=8046, avg=20.70, stdev=273.72 00:29:54.811 clat (msec): min=36, max=263, avg=93.17, stdev=30.13 00:29:54.811 lat (msec): min=36, max=263, avg=93.19, stdev=30.14 00:29:54.811 clat percentiles (msec): 00:29:54.811 | 1.00th=[ 44], 5.00th=[ 58], 10.00th=[ 61], 20.00th=[ 72], 00:29:54.811 | 30.00th=[ 74], 40.00th=[ 83], 50.00th=[ 85], 60.00th=[ 96], 00:29:54.811 | 70.00th=[ 107], 80.00th=[ 112], 90.00th=[ 130], 95.00th=[ 144], 00:29:54.811 | 99.00th=[ 201], 99.50th=[ 264], 99.90th=[ 264], 99.95th=[ 264], 00:29:54.811 | 99.99th=[ 264] 00:29:54.811 bw ( KiB/s): min= 384, max= 944, per=3.72%, avg=680.80, stdev=151.01, samples=20 00:29:54.811 iops : min= 96, max= 236, avg=170.20, stdev=37.75, samples=20 00:29:54.811 lat (msec) : 50=3.73%, 100=63.39%, 250=32.31%, 500=0.58% 00:29:54.811 cpu : usr=32.02%, sys=1.01%, ctx=934, majf=0, minf=9 00:29:54.811 IO depths : 1=1.9%, 2=4.8%, 4=15.4%, 8=66.5%, 16=11.4%, 32=0.0%, >=64=0.0% 00:29:54.811 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:54.811 complete : 0=0.0%, 4=91.3%, 8=3.8%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:54.811 issued rwts: total=1718,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:54.811 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:54.811 filename0: (groupid=0, jobs=1): err= 0: pid=108132: Wed Nov 20 11:42:38 2024 00:29:54.811 read: IOPS=178, BW=713KiB/s (730kB/s)(7136KiB/10007msec) 00:29:54.811 slat (nsec): min=3717, max=51915, avg=12299.22, stdev=5381.19 00:29:54.811 clat (msec): min=6, max=234, avg=89.61, stdev=30.55 00:29:54.811 lat (msec): min=6, max=234, avg=89.62, stdev=30.55 00:29:54.811 clat percentiles (msec): 00:29:54.811 | 1.00th=[ 28], 5.00th=[ 50], 10.00th=[ 60], 20.00th=[ 71], 00:29:54.811 | 30.00th=[ 74], 40.00th=[ 78], 50.00th=[ 84], 60.00th=[ 91], 00:29:54.811 | 70.00th=[ 104], 80.00th=[ 111], 90.00th=[ 126], 95.00th=[ 140], 00:29:54.811 | 99.00th=[ 222], 99.50th=[ 234], 99.90th=[ 234], 99.95th=[ 234], 00:29:54.811 | 99.99th=[ 234] 00:29:54.811 bw ( KiB/s): min= 512, max= 944, per=3.81%, avg=697.32, stdev=113.27, samples=19 00:29:54.811 iops : min= 128, max= 236, avg=174.32, stdev=28.30, samples=19 00:29:54.811 lat (msec) : 10=0.90%, 50=4.43%, 100=64.01%, 250=30.66% 00:29:54.811 cpu : 
usr=42.94%, sys=1.22%, ctx=1394, majf=0, minf=9 00:29:54.811 IO depths : 1=2.2%, 2=4.8%, 4=14.3%, 8=67.8%, 16=10.8%, 32=0.0%, >=64=0.0% 00:29:54.811 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:54.811 complete : 0=0.0%, 4=90.8%, 8=4.0%, 16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:54.811 issued rwts: total=1784,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:54.811 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:54.811 filename0: (groupid=0, jobs=1): err= 0: pid=108133: Wed Nov 20 11:42:38 2024 00:29:54.811 read: IOPS=193, BW=774KiB/s (793kB/s)(7760KiB/10023msec) 00:29:54.811 slat (usec): min=3, max=4022, avg=13.43, stdev=91.19 00:29:54.811 clat (msec): min=28, max=200, avg=82.49, stdev=30.14 00:29:54.811 lat (msec): min=28, max=200, avg=82.51, stdev=30.14 00:29:54.811 clat percentiles (msec): 00:29:54.811 | 1.00th=[ 38], 5.00th=[ 47], 10.00th=[ 50], 20.00th=[ 55], 00:29:54.811 | 30.00th=[ 66], 40.00th=[ 71], 50.00th=[ 77], 60.00th=[ 83], 00:29:54.811 | 70.00th=[ 95], 80.00th=[ 107], 90.00th=[ 120], 95.00th=[ 132], 00:29:54.811 | 99.00th=[ 194], 99.50th=[ 201], 99.90th=[ 201], 99.95th=[ 201], 00:29:54.811 | 99.99th=[ 201] 00:29:54.811 bw ( KiB/s): min= 512, max= 1152, per=4.23%, avg=772.05, stdev=215.98, samples=20 00:29:54.811 iops : min= 128, max= 288, avg=193.00, stdev=54.00, samples=20 00:29:54.811 lat (msec) : 50=12.37%, 100=60.41%, 250=27.22% 00:29:54.811 cpu : usr=43.54%, sys=1.20%, ctx=1605, majf=0, minf=9 00:29:54.811 IO depths : 1=2.3%, 2=4.9%, 4=13.7%, 8=68.3%, 16=10.8%, 32=0.0%, >=64=0.0% 00:29:54.811 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:54.811 complete : 0=0.0%, 4=90.8%, 8=4.1%, 16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:54.811 issued rwts: total=1940,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:54.811 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:54.811 filename0: (groupid=0, jobs=1): err= 0: pid=108134: Wed Nov 20 11:42:38 2024 00:29:54.811 read: IOPS=204, BW=818KiB/s (837kB/s)(8208KiB/10036msec) 00:29:54.811 slat (usec): min=7, max=8037, avg=15.27, stdev=177.26 00:29:54.811 clat (msec): min=28, max=188, avg=78.10, stdev=27.47 00:29:54.811 lat (msec): min=28, max=188, avg=78.12, stdev=27.47 00:29:54.811 clat percentiles (msec): 00:29:54.811 | 1.00th=[ 35], 5.00th=[ 39], 10.00th=[ 48], 20.00th=[ 59], 00:29:54.811 | 30.00th=[ 61], 40.00th=[ 71], 50.00th=[ 72], 60.00th=[ 83], 00:29:54.811 | 70.00th=[ 86], 80.00th=[ 97], 90.00th=[ 118], 95.00th=[ 132], 00:29:54.811 | 99.00th=[ 159], 99.50th=[ 184], 99.90th=[ 188], 99.95th=[ 188], 00:29:54.811 | 99.99th=[ 188] 00:29:54.811 bw ( KiB/s): min= 560, max= 1088, per=4.45%, avg=814.40, stdev=157.85, samples=20 00:29:54.811 iops : min= 140, max= 272, avg=203.60, stdev=39.46, samples=20 00:29:54.811 lat (msec) : 50=15.79%, 100=66.76%, 250=17.45% 00:29:54.811 cpu : usr=32.29%, sys=0.86%, ctx=901, majf=0, minf=9 00:29:54.811 IO depths : 1=0.9%, 2=2.1%, 4=9.2%, 8=74.9%, 16=12.9%, 32=0.0%, >=64=0.0% 00:29:54.811 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:54.811 complete : 0=0.0%, 4=89.8%, 8=5.9%, 16=4.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:54.811 issued rwts: total=2052,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:54.811 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:54.811 filename0: (groupid=0, jobs=1): err= 0: pid=108135: Wed Nov 20 11:42:38 2024 00:29:54.811 read: IOPS=188, BW=755KiB/s (773kB/s)(7564KiB/10014msec) 00:29:54.811 slat (usec): min=4, max=8021, avg=20.68, stdev=260.39 00:29:54.811 
clat (msec): min=22, max=194, avg=84.59, stdev=30.65 00:29:54.811 lat (msec): min=22, max=194, avg=84.62, stdev=30.66 00:29:54.811 clat percentiles (msec): 00:29:54.811 | 1.00th=[ 36], 5.00th=[ 48], 10.00th=[ 51], 20.00th=[ 60], 00:29:54.811 | 30.00th=[ 65], 40.00th=[ 72], 50.00th=[ 74], 60.00th=[ 86], 00:29:54.811 | 70.00th=[ 96], 80.00th=[ 109], 90.00th=[ 125], 95.00th=[ 142], 00:29:54.811 | 99.00th=[ 190], 99.50th=[ 192], 99.90th=[ 194], 99.95th=[ 194], 00:29:54.811 | 99.99th=[ 194] 00:29:54.811 bw ( KiB/s): min= 384, max= 1072, per=4.12%, avg=752.05, stdev=185.23, samples=20 00:29:54.811 iops : min= 96, max= 268, avg=188.00, stdev=46.31, samples=20 00:29:54.811 lat (msec) : 50=10.58%, 100=61.77%, 250=27.66% 00:29:54.811 cpu : usr=37.00%, sys=1.14%, ctx=1077, majf=0, minf=9 00:29:54.811 IO depths : 1=1.7%, 2=3.9%, 4=11.8%, 8=71.0%, 16=11.5%, 32=0.0%, >=64=0.0% 00:29:54.811 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:54.811 complete : 0=0.0%, 4=90.6%, 8=4.6%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:54.811 issued rwts: total=1891,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:54.811 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:54.811 filename0: (groupid=0, jobs=1): err= 0: pid=108136: Wed Nov 20 11:42:38 2024 00:29:54.811 read: IOPS=209, BW=839KiB/s (859kB/s)(8424KiB/10041msec) 00:29:54.811 slat (usec): min=3, max=8039, avg=21.22, stdev=262.33 00:29:54.811 clat (msec): min=26, max=188, avg=76.07, stdev=27.05 00:29:54.811 lat (msec): min=26, max=188, avg=76.09, stdev=27.06 00:29:54.811 clat percentiles (msec): 00:29:54.811 | 1.00th=[ 32], 5.00th=[ 44], 10.00th=[ 48], 20.00th=[ 55], 00:29:54.811 | 30.00th=[ 60], 40.00th=[ 68], 50.00th=[ 72], 60.00th=[ 78], 00:29:54.811 | 70.00th=[ 85], 80.00th=[ 95], 90.00th=[ 114], 95.00th=[ 129], 00:29:54.811 | 99.00th=[ 167], 99.50th=[ 182], 99.90th=[ 188], 99.95th=[ 188], 00:29:54.811 | 99.99th=[ 188] 00:29:54.811 bw ( KiB/s): min= 560, max= 1138, per=4.58%, avg=836.35, stdev=151.17, samples=20 00:29:54.811 iops : min= 140, max= 284, avg=209.00, stdev=37.73, samples=20 00:29:54.811 lat (msec) : 50=17.24%, 100=66.76%, 250=16.00% 00:29:54.811 cpu : usr=41.48%, sys=1.23%, ctx=1236, majf=0, minf=9 00:29:54.811 IO depths : 1=0.9%, 2=2.3%, 4=9.4%, 8=74.4%, 16=12.9%, 32=0.0%, >=64=0.0% 00:29:54.811 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:54.811 complete : 0=0.0%, 4=89.9%, 8=5.8%, 16=4.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:54.811 issued rwts: total=2106,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:54.811 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:54.811 filename0: (groupid=0, jobs=1): err= 0: pid=108137: Wed Nov 20 11:42:38 2024 00:29:54.811 read: IOPS=235, BW=943KiB/s (966kB/s)(9488KiB/10060msec) 00:29:54.811 slat (usec): min=4, max=8032, avg=16.92, stdev=193.36 00:29:54.811 clat (usec): min=1739, max=187788, avg=67750.52, stdev=28520.14 00:29:54.811 lat (usec): min=1748, max=187827, avg=67767.44, stdev=28527.29 00:29:54.811 clat percentiles (msec): 00:29:54.811 | 1.00th=[ 3], 5.00th=[ 13], 10.00th=[ 42], 20.00th=[ 48], 00:29:54.811 | 30.00th=[ 55], 40.00th=[ 59], 50.00th=[ 64], 60.00th=[ 71], 00:29:54.811 | 70.00th=[ 81], 80.00th=[ 86], 90.00th=[ 105], 95.00th=[ 118], 00:29:54.811 | 99.00th=[ 157], 99.50th=[ 182], 99.90th=[ 188], 99.95th=[ 188], 00:29:54.811 | 99.99th=[ 188] 00:29:54.811 bw ( KiB/s): min= 536, max= 1936, per=5.16%, avg=942.40, stdev=294.74, samples=20 00:29:54.811 iops : min= 134, max= 484, avg=235.60, stdev=73.68, samples=20 
00:29:54.811 lat (msec) : 2=0.67%, 4=1.35%, 10=2.70%, 20=0.67%, 50=19.94% 00:29:54.811 lat (msec) : 100=63.70%, 250=10.96% 00:29:54.811 cpu : usr=41.75%, sys=1.22%, ctx=1312, majf=0, minf=9 00:29:54.811 IO depths : 1=0.6%, 2=1.3%, 4=7.2%, 8=77.7%, 16=13.2%, 32=0.0%, >=64=0.0% 00:29:54.811 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:54.811 complete : 0=0.0%, 4=89.5%, 8=6.2%, 16=4.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:54.811 issued rwts: total=2372,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:54.811 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:54.811 filename0: (groupid=0, jobs=1): err= 0: pid=108138: Wed Nov 20 11:42:38 2024 00:29:54.811 read: IOPS=197, BW=791KiB/s (809kB/s)(7928KiB/10029msec) 00:29:54.811 slat (usec): min=7, max=3294, avg=14.29, stdev=73.95 00:29:54.811 clat (msec): min=22, max=265, avg=80.84, stdev=36.84 00:29:54.812 lat (msec): min=22, max=265, avg=80.85, stdev=36.84 00:29:54.812 clat percentiles (msec): 00:29:54.812 | 1.00th=[ 32], 5.00th=[ 39], 10.00th=[ 48], 20.00th=[ 56], 00:29:54.812 | 30.00th=[ 62], 40.00th=[ 66], 50.00th=[ 72], 60.00th=[ 77], 00:29:54.812 | 70.00th=[ 84], 80.00th=[ 106], 90.00th=[ 136], 95.00th=[ 155], 00:29:54.812 | 99.00th=[ 184], 99.50th=[ 266], 99.90th=[ 266], 99.95th=[ 266], 00:29:54.812 | 99.99th=[ 266] 00:29:54.812 bw ( KiB/s): min= 384, max= 1282, per=4.30%, avg=786.70, stdev=248.37, samples=20 00:29:54.812 iops : min= 96, max= 320, avg=196.60, stdev=62.03, samples=20 00:29:54.812 lat (msec) : 50=13.93%, 100=65.09%, 250=20.28%, 500=0.71% 00:29:54.812 cpu : usr=42.08%, sys=1.18%, ctx=1529, majf=0, minf=9 00:29:54.812 IO depths : 1=2.6%, 2=5.5%, 4=14.4%, 8=66.9%, 16=10.5%, 32=0.0%, >=64=0.0% 00:29:54.812 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:54.812 complete : 0=0.0%, 4=91.2%, 8=3.8%, 16=5.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:54.812 issued rwts: total=1982,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:54.812 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:54.812 filename1: (groupid=0, jobs=1): err= 0: pid=108139: Wed Nov 20 11:42:38 2024 00:29:54.812 read: IOPS=205, BW=821KiB/s (841kB/s)(8236KiB/10034msec) 00:29:54.812 slat (usec): min=4, max=8021, avg=19.29, stdev=249.57 00:29:54.812 clat (msec): min=19, max=241, avg=77.83, stdev=29.49 00:29:54.812 lat (msec): min=19, max=241, avg=77.85, stdev=29.49 00:29:54.812 clat percentiles (msec): 00:29:54.812 | 1.00th=[ 33], 5.00th=[ 41], 10.00th=[ 48], 20.00th=[ 59], 00:29:54.812 | 30.00th=[ 61], 40.00th=[ 70], 50.00th=[ 72], 60.00th=[ 82], 00:29:54.812 | 70.00th=[ 85], 80.00th=[ 96], 90.00th=[ 118], 95.00th=[ 132], 00:29:54.812 | 99.00th=[ 180], 99.50th=[ 228], 99.90th=[ 243], 99.95th=[ 243], 00:29:54.812 | 99.99th=[ 243] 00:29:54.812 bw ( KiB/s): min= 512, max= 1120, per=4.47%, avg=817.20, stdev=160.03, samples=20 00:29:54.812 iops : min= 128, max= 280, avg=204.30, stdev=40.01, samples=20 00:29:54.812 lat (msec) : 20=0.78%, 50=15.25%, 100=67.70%, 250=16.27% 00:29:54.812 cpu : usr=32.24%, sys=0.92%, ctx=859, majf=0, minf=9 00:29:54.812 IO depths : 1=0.5%, 2=1.2%, 4=6.3%, 8=78.1%, 16=13.9%, 32=0.0%, >=64=0.0% 00:29:54.812 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:54.812 complete : 0=0.0%, 4=89.3%, 8=7.0%, 16=3.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:54.812 issued rwts: total=2059,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:54.812 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:54.812 filename1: (groupid=0, jobs=1): err= 0: pid=108140: Wed Nov 20 
11:42:38 2024 00:29:54.812 read: IOPS=182, BW=728KiB/s (746kB/s)(7308KiB/10037msec) 00:29:54.812 slat (nsec): min=7418, max=52117, avg=12233.03, stdev=5340.96 00:29:54.812 clat (msec): min=21, max=327, avg=87.61, stdev=37.87 00:29:54.812 lat (msec): min=21, max=327, avg=87.62, stdev=37.87 00:29:54.812 clat percentiles (msec): 00:29:54.812 | 1.00th=[ 23], 5.00th=[ 47], 10.00th=[ 50], 20.00th=[ 61], 00:29:54.812 | 30.00th=[ 70], 40.00th=[ 72], 50.00th=[ 82], 60.00th=[ 92], 00:29:54.812 | 70.00th=[ 99], 80.00th=[ 109], 90.00th=[ 122], 95.00th=[ 144], 00:29:54.812 | 99.00th=[ 232], 99.50th=[ 275], 99.90th=[ 330], 99.95th=[ 330], 00:29:54.812 | 99.99th=[ 330] 00:29:54.812 bw ( KiB/s): min= 384, max= 1040, per=3.99%, avg=729.10, stdev=193.63, samples=20 00:29:54.812 iops : min= 96, max= 260, avg=182.20, stdev=48.37, samples=20 00:29:54.812 lat (msec) : 50=10.84%, 100=59.82%, 250=28.46%, 500=0.88% 00:29:54.812 cpu : usr=32.40%, sys=0.75%, ctx=897, majf=0, minf=9 00:29:54.812 IO depths : 1=1.9%, 2=4.1%, 4=13.1%, 8=69.5%, 16=11.4%, 32=0.0%, >=64=0.0% 00:29:54.812 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:54.812 complete : 0=0.0%, 4=90.5%, 8=4.7%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:54.812 issued rwts: total=1827,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:54.812 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:54.812 filename1: (groupid=0, jobs=1): err= 0: pid=108141: Wed Nov 20 11:42:38 2024 00:29:54.812 read: IOPS=210, BW=841KiB/s (861kB/s)(8452KiB/10052msec) 00:29:54.812 slat (usec): min=4, max=8023, avg=23.89, stdev=285.80 00:29:54.812 clat (msec): min=13, max=209, avg=75.90, stdev=30.26 00:29:54.812 lat (msec): min=13, max=209, avg=75.92, stdev=30.26 00:29:54.812 clat percentiles (msec): 00:29:54.812 | 1.00th=[ 22], 5.00th=[ 41], 10.00th=[ 46], 20.00th=[ 52], 00:29:54.812 | 30.00th=[ 57], 40.00th=[ 61], 50.00th=[ 71], 60.00th=[ 79], 00:29:54.812 | 70.00th=[ 85], 80.00th=[ 101], 90.00th=[ 121], 95.00th=[ 132], 00:29:54.812 | 99.00th=[ 163], 99.50th=[ 176], 99.90th=[ 211], 99.95th=[ 211], 00:29:54.812 | 99.99th=[ 211] 00:29:54.812 bw ( KiB/s): min= 512, max= 1200, per=4.59%, avg=838.50, stdev=215.71, samples=20 00:29:54.812 iops : min= 128, max= 300, avg=209.60, stdev=53.91, samples=20 00:29:54.812 lat (msec) : 20=0.76%, 50=17.70%, 100=61.71%, 250=19.83% 00:29:54.812 cpu : usr=38.50%, sys=1.13%, ctx=1114, majf=0, minf=9 00:29:54.812 IO depths : 1=0.7%, 2=1.4%, 4=7.6%, 8=77.6%, 16=12.8%, 32=0.0%, >=64=0.0% 00:29:54.812 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:54.812 complete : 0=0.0%, 4=89.5%, 8=5.9%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:54.812 issued rwts: total=2113,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:54.812 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:54.812 filename1: (groupid=0, jobs=1): err= 0: pid=108142: Wed Nov 20 11:42:38 2024 00:29:54.812 read: IOPS=166, BW=666KiB/s (682kB/s)(6668KiB/10011msec) 00:29:54.812 slat (usec): min=5, max=8038, avg=28.56, stdev=319.31 00:29:54.812 clat (msec): min=36, max=263, avg=95.79, stdev=30.48 00:29:54.812 lat (msec): min=36, max=263, avg=95.82, stdev=30.47 00:29:54.812 clat percentiles (msec): 00:29:54.812 | 1.00th=[ 42], 5.00th=[ 57], 10.00th=[ 64], 20.00th=[ 72], 00:29:54.812 | 30.00th=[ 79], 40.00th=[ 83], 50.00th=[ 94], 60.00th=[ 100], 00:29:54.812 | 70.00th=[ 108], 80.00th=[ 118], 90.00th=[ 128], 95.00th=[ 146], 00:29:54.812 | 99.00th=[ 197], 99.50th=[ 264], 99.90th=[ 264], 99.95th=[ 264], 00:29:54.812 | 99.99th=[ 264] 
00:29:54.812 bw ( KiB/s): min= 384, max= 872, per=3.63%, avg=664.40, stdev=150.26, samples=20 00:29:54.812 iops : min= 96, max= 218, avg=166.10, stdev=37.57, samples=20 00:29:54.812 lat (msec) : 50=2.16%, 100=58.25%, 250=38.75%, 500=0.84% 00:29:54.812 cpu : usr=41.06%, sys=0.96%, ctx=1244, majf=0, minf=9 00:29:54.812 IO depths : 1=3.6%, 2=8.0%, 4=19.7%, 8=59.7%, 16=9.0%, 32=0.0%, >=64=0.0% 00:29:54.812 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:54.812 complete : 0=0.0%, 4=92.6%, 8=1.8%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:54.812 issued rwts: total=1667,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:54.812 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:54.812 filename1: (groupid=0, jobs=1): err= 0: pid=108143: Wed Nov 20 11:42:38 2024 00:29:54.812 read: IOPS=182, BW=730KiB/s (748kB/s)(7308KiB/10009msec) 00:29:54.812 slat (usec): min=7, max=3824, avg=13.29, stdev=89.33 00:29:54.812 clat (msec): min=6, max=189, avg=87.53, stdev=27.48 00:29:54.812 lat (msec): min=10, max=189, avg=87.54, stdev=27.48 00:29:54.812 clat percentiles (msec): 00:29:54.812 | 1.00th=[ 37], 5.00th=[ 48], 10.00th=[ 57], 20.00th=[ 63], 00:29:54.812 | 30.00th=[ 72], 40.00th=[ 81], 50.00th=[ 85], 60.00th=[ 93], 00:29:54.812 | 70.00th=[ 97], 80.00th=[ 110], 90.00th=[ 122], 95.00th=[ 132], 00:29:54.812 | 99.00th=[ 169], 99.50th=[ 188], 99.90th=[ 190], 99.95th=[ 190], 00:29:54.812 | 99.99th=[ 190] 00:29:54.812 bw ( KiB/s): min= 560, max= 944, per=3.97%, avg=726.05, stdev=112.91, samples=20 00:29:54.812 iops : min= 140, max= 236, avg=181.50, stdev=28.24, samples=20 00:29:54.812 lat (msec) : 10=0.05%, 20=0.05%, 50=7.61%, 100=65.08%, 250=27.20% 00:29:54.812 cpu : usr=32.17%, sys=0.86%, ctx=920, majf=0, minf=9 00:29:54.812 IO depths : 1=0.9%, 2=2.4%, 4=11.0%, 8=73.2%, 16=12.5%, 32=0.0%, >=64=0.0% 00:29:54.812 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:54.812 complete : 0=0.0%, 4=89.9%, 8=5.4%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:54.812 issued rwts: total=1827,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:54.812 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:54.812 filename1: (groupid=0, jobs=1): err= 0: pid=108144: Wed Nov 20 11:42:38 2024 00:29:54.812 read: IOPS=194, BW=778KiB/s (797kB/s)(7804KiB/10027msec) 00:29:54.812 slat (usec): min=4, max=8022, avg=27.54, stdev=330.67 00:29:54.812 clat (msec): min=29, max=263, avg=82.07, stdev=30.42 00:29:54.812 lat (msec): min=29, max=263, avg=82.10, stdev=30.42 00:29:54.812 clat percentiles (msec): 00:29:54.812 | 1.00th=[ 37], 5.00th=[ 48], 10.00th=[ 53], 20.00th=[ 61], 00:29:54.812 | 30.00th=[ 69], 40.00th=[ 72], 50.00th=[ 74], 60.00th=[ 81], 00:29:54.812 | 70.00th=[ 88], 80.00th=[ 105], 90.00th=[ 121], 95.00th=[ 132], 00:29:54.812 | 99.00th=[ 194], 99.50th=[ 264], 99.90th=[ 264], 99.95th=[ 264], 00:29:54.812 | 99.99th=[ 264] 00:29:54.812 bw ( KiB/s): min= 384, max= 1120, per=4.24%, avg=774.00, stdev=169.43, samples=20 00:29:54.812 iops : min= 96, max= 280, avg=193.50, stdev=42.36, samples=20 00:29:54.812 lat (msec) : 50=8.10%, 100=71.40%, 250=19.68%, 500=0.82% 00:29:54.812 cpu : usr=38.36%, sys=1.10%, ctx=1318, majf=0, minf=9 00:29:54.812 IO depths : 1=1.7%, 2=3.8%, 4=12.1%, 8=70.6%, 16=11.7%, 32=0.0%, >=64=0.0% 00:29:54.812 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:54.812 complete : 0=0.0%, 4=90.6%, 8=4.8%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:54.812 issued rwts: total=1951,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:54.812 latency : 
target=0, window=0, percentile=100.00%, depth=16 00:29:54.812 filename1: (groupid=0, jobs=1): err= 0: pid=108145: Wed Nov 20 11:42:38 2024 00:29:54.812 read: IOPS=169, BW=677KiB/s (693kB/s)(6772KiB/10001msec) 00:29:54.812 slat (usec): min=7, max=8045, avg=31.10, stdev=389.23 00:29:54.812 clat (msec): min=16, max=273, avg=94.29, stdev=33.54 00:29:54.812 lat (msec): min=16, max=273, avg=94.32, stdev=33.54 00:29:54.812 clat percentiles (msec): 00:29:54.812 | 1.00th=[ 45], 5.00th=[ 50], 10.00th=[ 61], 20.00th=[ 72], 00:29:54.812 | 30.00th=[ 75], 40.00th=[ 84], 50.00th=[ 86], 60.00th=[ 96], 00:29:54.812 | 70.00th=[ 107], 80.00th=[ 117], 90.00th=[ 126], 95.00th=[ 150], 00:29:54.812 | 99.00th=[ 215], 99.50th=[ 275], 99.90th=[ 275], 99.95th=[ 275], 00:29:54.812 | 99.99th=[ 275] 00:29:54.812 bw ( KiB/s): min= 384, max= 856, per=3.64%, avg=665.74, stdev=139.34, samples=19 00:29:54.812 iops : min= 96, max= 214, avg=166.42, stdev=34.85, samples=19 00:29:54.812 lat (msec) : 20=0.95%, 50=4.13%, 100=62.14%, 250=31.84%, 500=0.95% 00:29:54.812 cpu : usr=31.89%, sys=1.19%, ctx=932, majf=0, minf=9 00:29:54.812 IO depths : 1=2.4%, 2=5.7%, 4=16.2%, 8=65.1%, 16=10.6%, 32=0.0%, >=64=0.0% 00:29:54.812 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:54.812 complete : 0=0.0%, 4=91.6%, 8=3.1%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:54.812 issued rwts: total=1693,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:54.812 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:54.812 filename1: (groupid=0, jobs=1): err= 0: pid=108146: Wed Nov 20 11:42:38 2024 00:29:54.812 read: IOPS=215, BW=861KiB/s (881kB/s)(8652KiB/10054msec) 00:29:54.812 slat (usec): min=3, max=8024, avg=18.72, stdev=243.64 00:29:54.812 clat (msec): min=12, max=183, avg=74.22, stdev=26.44 00:29:54.812 lat (msec): min=12, max=183, avg=74.23, stdev=26.46 00:29:54.812 clat percentiles (msec): 00:29:54.812 | 1.00th=[ 24], 5.00th=[ 43], 10.00th=[ 48], 20.00th=[ 54], 00:29:54.812 | 30.00th=[ 60], 40.00th=[ 63], 50.00th=[ 72], 60.00th=[ 75], 00:29:54.812 | 70.00th=[ 84], 80.00th=[ 92], 90.00th=[ 108], 95.00th=[ 131], 00:29:54.812 | 99.00th=[ 157], 99.50th=[ 180], 99.90th=[ 184], 99.95th=[ 184], 00:29:54.812 | 99.99th=[ 184] 00:29:54.812 bw ( KiB/s): min= 560, max= 1120, per=4.70%, avg=858.65, stdev=159.30, samples=20 00:29:54.812 iops : min= 140, max= 280, avg=214.60, stdev=39.84, samples=20 00:29:54.812 lat (msec) : 20=0.74%, 50=16.27%, 100=69.44%, 250=13.55% 00:29:54.812 cpu : usr=38.24%, sys=1.08%, ctx=1094, majf=0, minf=9 00:29:54.812 IO depths : 1=0.5%, 2=1.2%, 4=7.6%, 8=77.4%, 16=13.3%, 32=0.0%, >=64=0.0% 00:29:54.812 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:54.812 complete : 0=0.0%, 4=89.5%, 8=6.2%, 16=4.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:54.812 issued rwts: total=2163,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:54.812 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:54.812 filename2: (groupid=0, jobs=1): err= 0: pid=108147: Wed Nov 20 11:42:38 2024 00:29:54.812 read: IOPS=188, BW=753KiB/s (771kB/s)(7536KiB/10007msec) 00:29:54.812 slat (usec): min=6, max=8024, avg=19.96, stdev=261.02 00:29:54.812 clat (msec): min=23, max=181, avg=84.85, stdev=27.88 00:29:54.812 lat (msec): min=23, max=181, avg=84.87, stdev=27.87 00:29:54.812 clat percentiles (msec): 00:29:54.812 | 1.00th=[ 39], 5.00th=[ 48], 10.00th=[ 50], 20.00th=[ 61], 00:29:54.812 | 30.00th=[ 71], 40.00th=[ 72], 50.00th=[ 83], 60.00th=[ 86], 00:29:54.812 | 70.00th=[ 96], 80.00th=[ 108], 90.00th=[ 121], 
95.00th=[ 132], 00:29:54.812 | 99.00th=[ 180], 99.50th=[ 180], 99.90th=[ 182], 99.95th=[ 182], 00:29:54.812 | 99.99th=[ 182] 00:29:54.812 bw ( KiB/s): min= 512, max= 1072, per=4.09%, avg=747.25, stdev=170.79, samples=20 00:29:54.812 iops : min= 128, max= 268, avg=186.80, stdev=42.72, samples=20 00:29:54.812 lat (msec) : 50=10.40%, 100=64.12%, 250=25.48% 00:29:54.812 cpu : usr=32.14%, sys=0.97%, ctx=884, majf=0, minf=9 00:29:54.812 IO depths : 1=0.9%, 2=2.1%, 4=9.2%, 8=75.3%, 16=12.5%, 32=0.0%, >=64=0.0% 00:29:54.812 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:54.812 complete : 0=0.0%, 4=89.8%, 8=5.6%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:54.812 issued rwts: total=1884,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:54.812 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:54.812 filename2: (groupid=0, jobs=1): err= 0: pid=108148: Wed Nov 20 11:42:38 2024 00:29:54.812 read: IOPS=189, BW=759KiB/s (777kB/s)(7620KiB/10037msec) 00:29:54.812 slat (usec): min=6, max=8034, avg=28.11, stdev=367.10 00:29:54.812 clat (msec): min=23, max=230, avg=84.08, stdev=27.70 00:29:54.812 lat (msec): min=23, max=230, avg=84.11, stdev=27.73 00:29:54.812 clat percentiles (msec): 00:29:54.812 | 1.00th=[ 34], 5.00th=[ 48], 10.00th=[ 52], 20.00th=[ 61], 00:29:54.812 | 30.00th=[ 72], 40.00th=[ 72], 50.00th=[ 83], 60.00th=[ 85], 00:29:54.812 | 70.00th=[ 96], 80.00th=[ 108], 90.00th=[ 121], 95.00th=[ 132], 00:29:54.812 | 99.00th=[ 169], 99.50th=[ 205], 99.90th=[ 232], 99.95th=[ 232], 00:29:54.812 | 99.99th=[ 232] 00:29:54.812 bw ( KiB/s): min= 480, max= 1010, per=4.13%, avg=755.90, stdev=125.06, samples=20 00:29:54.812 iops : min= 120, max= 252, avg=188.90, stdev=31.20, samples=20 00:29:54.812 lat (msec) : 50=9.97%, 100=65.35%, 250=24.67% 00:29:54.812 cpu : usr=32.06%, sys=1.11%, ctx=871, majf=0, minf=9 00:29:54.812 IO depths : 1=2.0%, 2=4.9%, 4=14.1%, 8=67.5%, 16=11.5%, 32=0.0%, >=64=0.0% 00:29:54.812 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:54.812 complete : 0=0.0%, 4=91.3%, 8=4.0%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:54.812 issued rwts: total=1905,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:54.812 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:54.812 filename2: (groupid=0, jobs=1): err= 0: pid=108149: Wed Nov 20 11:42:38 2024 00:29:54.812 read: IOPS=169, BW=678KiB/s (694kB/s)(6784KiB/10006msec) 00:29:54.812 slat (usec): min=4, max=1774, avg=13.29, stdev=43.08 00:29:54.812 clat (msec): min=7, max=279, avg=94.28, stdev=31.91 00:29:54.812 lat (msec): min=7, max=279, avg=94.29, stdev=31.91 00:29:54.812 clat percentiles (msec): 00:29:54.812 | 1.00th=[ 39], 5.00th=[ 64], 10.00th=[ 70], 20.00th=[ 72], 00:29:54.812 | 30.00th=[ 78], 40.00th=[ 81], 50.00th=[ 86], 60.00th=[ 93], 00:29:54.812 | 70.00th=[ 108], 80.00th=[ 113], 90.00th=[ 125], 95.00th=[ 144], 00:29:54.812 | 99.00th=[ 199], 99.50th=[ 279], 99.90th=[ 279], 99.95th=[ 279], 00:29:54.812 | 99.99th=[ 279] 00:29:54.812 bw ( KiB/s): min= 384, max= 896, per=3.64%, avg=666.95, stdev=133.02, samples=19 00:29:54.812 iops : min= 96, max= 224, avg=166.74, stdev=33.25, samples=19 00:29:54.812 lat (msec) : 10=0.12%, 20=0.83%, 50=0.65%, 100=64.45%, 250=33.02% 00:29:54.812 lat (msec) : 500=0.94% 00:29:54.812 cpu : usr=41.97%, sys=1.26%, ctx=1289, majf=0, minf=9 00:29:54.812 IO depths : 1=3.7%, 2=8.4%, 4=20.6%, 8=58.4%, 16=9.0%, 32=0.0%, >=64=0.0% 00:29:54.812 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:54.812 complete : 0=0.0%, 4=92.9%, 
8=1.5%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:54.812 issued rwts: total=1696,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:54.812 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:54.812 filename2: (groupid=0, jobs=1): err= 0: pid=108150: Wed Nov 20 11:42:38 2024 00:29:54.812 read: IOPS=221, BW=887KiB/s (908kB/s)(8916KiB/10050msec) 00:29:54.812 slat (usec): min=7, max=4028, avg=13.29, stdev=85.24 00:29:54.812 clat (msec): min=5, max=331, avg=71.88, stdev=35.56 00:29:54.812 lat (msec): min=5, max=331, avg=71.89, stdev=35.56 00:29:54.812 clat percentiles (msec): 00:29:54.812 | 1.00th=[ 7], 5.00th=[ 35], 10.00th=[ 44], 20.00th=[ 48], 00:29:54.812 | 30.00th=[ 55], 40.00th=[ 59], 50.00th=[ 65], 60.00th=[ 72], 00:29:54.812 | 70.00th=[ 81], 80.00th=[ 94], 90.00th=[ 114], 95.00th=[ 125], 00:29:54.812 | 99.00th=[ 157], 99.50th=[ 334], 99.90th=[ 334], 99.95th=[ 334], 00:29:54.812 | 99.99th=[ 334] 00:29:54.812 bw ( KiB/s): min= 384, max= 1536, per=4.84%, avg=885.20, stdev=268.81, samples=20 00:29:54.812 iops : min= 96, max= 384, avg=221.30, stdev=67.20, samples=20 00:29:54.812 lat (msec) : 10=2.87%, 20=0.63%, 50=20.95%, 100=59.35%, 250=15.48% 00:29:54.812 lat (msec) : 500=0.72% 00:29:54.812 cpu : usr=41.53%, sys=1.29%, ctx=1219, majf=0, minf=0 00:29:54.812 IO depths : 1=0.6%, 2=1.4%, 4=9.0%, 8=76.3%, 16=12.7%, 32=0.0%, >=64=0.0% 00:29:54.812 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:54.812 complete : 0=0.0%, 4=89.4%, 8=5.9%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:54.812 issued rwts: total=2229,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:54.812 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:54.812 filename2: (groupid=0, jobs=1): err= 0: pid=108151: Wed Nov 20 11:42:38 2024 00:29:54.812 read: IOPS=164, BW=659KiB/s (675kB/s)(6592KiB/10004msec) 00:29:54.812 slat (usec): min=3, max=8022, avg=21.55, stdev=241.73 00:29:54.812 clat (msec): min=5, max=264, avg=96.92, stdev=33.04 00:29:54.812 lat (msec): min=5, max=264, avg=96.95, stdev=33.04 00:29:54.812 clat percentiles (msec): 00:29:54.812 | 1.00th=[ 42], 5.00th=[ 61], 10.00th=[ 68], 20.00th=[ 72], 00:29:54.812 | 30.00th=[ 78], 40.00th=[ 84], 50.00th=[ 90], 60.00th=[ 99], 00:29:54.812 | 70.00th=[ 108], 80.00th=[ 121], 90.00th=[ 136], 95.00th=[ 153], 00:29:54.812 | 99.00th=[ 190], 99.50th=[ 266], 99.90th=[ 266], 99.95th=[ 266], 00:29:54.812 | 99.99th=[ 266] 00:29:54.812 bw ( KiB/s): min= 384, max= 768, per=3.50%, avg=640.00, stdev=120.68, samples=19 00:29:54.812 iops : min= 96, max= 192, avg=160.00, stdev=30.17, samples=19 00:29:54.812 lat (msec) : 10=0.97%, 50=1.94%, 100=58.92%, 250=37.20%, 500=0.97% 00:29:54.812 cpu : usr=39.88%, sys=1.22%, ctx=1092, majf=0, minf=9 00:29:54.812 IO depths : 1=4.3%, 2=9.3%, 4=21.2%, 8=57.0%, 16=8.2%, 32=0.0%, >=64=0.0% 00:29:54.812 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:54.812 complete : 0=0.0%, 4=93.0%, 8=1.2%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:54.812 issued rwts: total=1648,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:54.812 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:54.812 filename2: (groupid=0, jobs=1): err= 0: pid=108152: Wed Nov 20 11:42:38 2024 00:29:54.812 read: IOPS=173, BW=694KiB/s (711kB/s)(6948KiB/10007msec) 00:29:54.812 slat (usec): min=4, max=8020, avg=15.64, stdev=192.22 00:29:54.812 clat (msec): min=37, max=299, avg=92.03, stdev=32.78 00:29:54.812 lat (msec): min=37, max=299, avg=92.04, stdev=32.78 00:29:54.812 clat percentiles (msec): 00:29:54.812 | 1.00th=[ 45], 5.00th=[ 
52], 10.00th=[ 61], 20.00th=[ 72], 00:29:54.812 | 30.00th=[ 73], 40.00th=[ 82], 50.00th=[ 85], 60.00th=[ 95], 00:29:54.812 | 70.00th=[ 102], 80.00th=[ 109], 90.00th=[ 130], 95.00th=[ 144], 00:29:54.812 | 99.00th=[ 215], 99.50th=[ 300], 99.90th=[ 300], 99.95th=[ 300], 00:29:54.812 | 99.99th=[ 300] 00:29:54.812 bw ( KiB/s): min= 384, max= 912, per=3.77%, avg=689.95, stdev=143.26, samples=19 00:29:54.812 iops : min= 96, max= 228, avg=172.47, stdev=35.82, samples=19 00:29:54.812 lat (msec) : 50=4.61%, 100=64.71%, 250=29.76%, 500=0.92% 00:29:54.812 cpu : usr=32.14%, sys=1.02%, ctx=860, majf=0, minf=9 00:29:54.812 IO depths : 1=1.8%, 2=4.5%, 4=13.5%, 8=69.0%, 16=11.2%, 32=0.0%, >=64=0.0% 00:29:54.813 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:54.813 complete : 0=0.0%, 4=91.1%, 8=3.7%, 16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:54.813 issued rwts: total=1737,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:54.813 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:54.813 filename2: (groupid=0, jobs=1): err= 0: pid=108153: Wed Nov 20 11:42:38 2024 00:29:54.813 read: IOPS=172, BW=689KiB/s (705kB/s)(6896KiB/10010msec) 00:29:54.813 slat (usec): min=4, max=5023, avg=21.55, stdev=196.29 00:29:54.813 clat (msec): min=31, max=279, avg=92.70, stdev=32.79 00:29:54.813 lat (msec): min=31, max=279, avg=92.72, stdev=32.79 00:29:54.813 clat percentiles (msec): 00:29:54.813 | 1.00th=[ 46], 5.00th=[ 52], 10.00th=[ 62], 20.00th=[ 71], 00:29:54.813 | 30.00th=[ 74], 40.00th=[ 80], 50.00th=[ 86], 60.00th=[ 96], 00:29:54.813 | 70.00th=[ 104], 80.00th=[ 112], 90.00th=[ 124], 95.00th=[ 148], 00:29:54.813 | 99.00th=[ 209], 99.50th=[ 279], 99.90th=[ 279], 99.95th=[ 279], 00:29:54.813 | 99.99th=[ 279] 00:29:54.813 bw ( KiB/s): min= 384, max= 896, per=3.77%, avg=688.85, stdev=145.26, samples=20 00:29:54.813 iops : min= 96, max= 224, avg=172.20, stdev=36.32, samples=20 00:29:54.813 lat (msec) : 50=4.29%, 100=61.66%, 250=33.12%, 500=0.93% 00:29:54.813 cpu : usr=44.34%, sys=1.24%, ctx=1596, majf=0, minf=10 00:29:54.813 IO depths : 1=3.6%, 2=7.5%, 4=17.7%, 8=61.9%, 16=9.3%, 32=0.0%, >=64=0.0% 00:29:54.813 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:54.813 complete : 0=0.0%, 4=92.1%, 8=2.5%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:54.813 issued rwts: total=1724,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:54.813 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:54.813 filename2: (groupid=0, jobs=1): err= 0: pid=108154: Wed Nov 20 11:42:38 2024 00:29:54.813 read: IOPS=198, BW=794KiB/s (813kB/s)(7944KiB/10009msec) 00:29:54.813 slat (usec): min=3, max=4019, avg=17.86, stdev=155.75 00:29:54.813 clat (msec): min=9, max=275, avg=80.53, stdev=31.34 00:29:54.813 lat (msec): min=9, max=275, avg=80.55, stdev=31.34 00:29:54.813 clat percentiles (msec): 00:29:54.813 | 1.00th=[ 34], 5.00th=[ 46], 10.00th=[ 50], 20.00th=[ 56], 00:29:54.813 | 30.00th=[ 64], 40.00th=[ 71], 50.00th=[ 77], 60.00th=[ 83], 00:29:54.813 | 70.00th=[ 90], 80.00th=[ 101], 90.00th=[ 112], 95.00th=[ 127], 00:29:54.813 | 99.00th=[ 194], 99.50th=[ 266], 99.90th=[ 275], 99.95th=[ 275], 00:29:54.813 | 99.99th=[ 275] 00:29:54.813 bw ( KiB/s): min= 424, max= 1072, per=4.31%, avg=788.00, stdev=171.61, samples=20 00:29:54.813 iops : min= 106, max= 268, avg=197.00, stdev=42.90, samples=20 00:29:54.813 lat (msec) : 10=0.81%, 50=10.93%, 100=68.43%, 250=19.03%, 500=0.81% 00:29:54.813 cpu : usr=43.85%, sys=1.22%, ctx=1516, majf=0, minf=9 00:29:54.813 IO depths : 1=1.6%, 2=3.8%, 4=12.5%, 
8=70.9%, 16=11.2%, 32=0.0%, >=64=0.0% 00:29:54.813 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:54.813 complete : 0=0.0%, 4=90.6%, 8=4.1%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:54.813 issued rwts: total=1986,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:54.813 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:54.813 00:29:54.813 Run status group 0 (all jobs): 00:29:54.813 READ: bw=17.8MiB/s (18.7MB/s), 659KiB/s-943KiB/s (675kB/s-966kB/s), io=180MiB (188MB), run=10001-10060msec 00:29:54.813 11:42:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:29:54.813 11:42:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:29:54.813 11:42:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:29:54.813 11:42:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:29:54.813 11:42:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:29:54.813 11:42:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:29:54.813 11:42:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:54.813 11:42:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:54.813 11:42:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:54.813 11:42:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:29:54.813 11:42:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:54.813 11:42:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:54.813 11:42:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:54.813 11:42:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:29:54.813 11:42:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:29:54.813 11:42:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:29:54.813 11:42:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:54.813 11:42:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:54.813 11:42:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:54.813 11:42:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:54.813 11:42:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:29:54.813 11:42:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:54.813 11:42:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:54.813 11:42:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:54.813 11:42:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:29:54.813 11:42:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:29:54.813 11:42:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:29:54.813 11:42:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:29:54.813 11:42:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:54.813 11:42:38 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@10 -- # set +x 00:29:54.813 11:42:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:54.813 11:42:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:29:54.813 11:42:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:54.813 11:42:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:54.813 11:42:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:54.813 11:42:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:29:54.813 11:42:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:29:54.813 11:42:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:29:54.813 11:42:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:29:54.813 11:42:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:29:54.813 11:42:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:29:54.813 11:42:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:29:54.813 11:42:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:29:54.813 11:42:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:29:54.813 11:42:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:29:54.813 11:42:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:29:54.813 11:42:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:29:54.813 11:42:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:54.813 11:42:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:54.813 bdev_null0 00:29:54.813 11:42:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:54.813 11:42:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:29:54.813 11:42:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:54.813 11:42:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:54.813 11:42:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:54.813 11:42:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:29:54.813 11:42:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:54.813 11:42:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:54.813 11:42:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:54.813 11:42:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:29:54.813 11:42:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:54.813 11:42:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:54.813 [2024-11-20 11:42:38.585117] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:29:54.813 11:42:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:54.813 
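
For reference, the subsystem setup traced above reduces to a short RPC sequence. The sketch below is not the harness itself: it assumes nvmf_tgt is already running with a tcp transport created earlier in the run (not shown here), and that rpc.py is invoked from the SPDK repository root; the bdev geometry, NQN, serial number and the 10.0.0.3:4420 listener are copied from the trace.

#!/usr/bin/env bash
# Minimal sketch of the create_subsystem steps above (assumes a running
# nvmf_tgt with a tcp transport; scripts/rpc.py is the stock SPDK RPC client).
rpc=./scripts/rpc.py
$rpc bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420
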
11:42:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:29:54.813 11:42:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:29:54.813 11:42:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:29:54.813 11:42:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:29:54.813 11:42:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:54.813 11:42:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:54.813 bdev_null1 00:29:54.813 11:42:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:54.813 11:42:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:29:54.813 11:42:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:54.813 11:42:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:54.813 11:42:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:54.813 11:42:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:29:54.813 11:42:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:54.813 11:42:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:54.813 11:42:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:54.813 11:42:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:29:54.813 11:42:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:54.813 11:42:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:54.813 11:42:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:54.813 11:42:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:29:54.813 11:42:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:29:54.813 11:42:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:29:54.813 11:42:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:29:54.813 11:42:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:54.813 11:42:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:29:54.813 11:42:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:29:54.813 11:42:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:54.813 11:42:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:29:54.813 11:42:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:29:54.813 11:42:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:54.813 11:42:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:29:54.813 11:42:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 
-- # local sanitizers 00:29:54.813 11:42:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:29:54.813 11:42:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:54.813 11:42:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:29:54.813 11:42:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:29:54.813 11:42:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:54.813 { 00:29:54.813 "params": { 00:29:54.813 "name": "Nvme$subsystem", 00:29:54.813 "trtype": "$TEST_TRANSPORT", 00:29:54.813 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:54.813 "adrfam": "ipv4", 00:29:54.813 "trsvcid": "$NVMF_PORT", 00:29:54.813 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:54.813 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:54.813 "hdgst": ${hdgst:-false}, 00:29:54.813 "ddgst": ${ddgst:-false} 00:29:54.813 }, 00:29:54.813 "method": "bdev_nvme_attach_controller" 00:29:54.813 } 00:29:54.813 EOF 00:29:54.813 )") 00:29:54.813 11:42:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:29:54.813 11:42:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:29:54.813 11:42:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:29:54.813 11:42:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:29:54.813 11:42:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:29:54.813 11:42:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:29:54.813 11:42:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:29:54.813 11:42:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:29:54.813 11:42:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:54.813 11:42:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:54.813 { 00:29:54.813 "params": { 00:29:54.813 "name": "Nvme$subsystem", 00:29:54.813 "trtype": "$TEST_TRANSPORT", 00:29:54.813 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:54.813 "adrfam": "ipv4", 00:29:54.813 "trsvcid": "$NVMF_PORT", 00:29:54.813 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:54.813 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:54.813 "hdgst": ${hdgst:-false}, 00:29:54.813 "ddgst": ${ddgst:-false} 00:29:54.813 }, 00:29:54.813 "method": "bdev_nvme_attach_controller" 00:29:54.813 } 00:29:54.813 EOF 00:29:54.813 )") 00:29:54.813 11:42:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:29:54.813 11:42:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:29:54.813 11:42:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:29:54.813 11:42:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
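
The heredocs above emit one bdev_nvme_attach_controller entry per subsystem; jq then joins them into the bdev configuration that the fio plugin reads over a file descriptor, as printed in the trace that follows. A standalone equivalent might look like the sketch below: the plugin and fio paths are the ones shown in the log, while the outer "subsystems" wrapper and the dif_rand.fio job file name (sketched further down, after the job-definition lines) are assumptions.

# Hedged sketch: hand an SPDK bdev JSON config to fio's spdk_bdev engine the
# same way the trace does, only via a regular file instead of /dev/fd/62.
cat > bdev.json <<'JSON'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.3",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
JSON
LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
  /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf bdev.json dif_rand.fio
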
00:29:54.813 11:42:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:29:54.813 11:42:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:29:54.813 "params": { 00:29:54.813 "name": "Nvme0", 00:29:54.813 "trtype": "tcp", 00:29:54.813 "traddr": "10.0.0.3", 00:29:54.813 "adrfam": "ipv4", 00:29:54.813 "trsvcid": "4420", 00:29:54.813 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:54.813 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:54.813 "hdgst": false, 00:29:54.813 "ddgst": false 00:29:54.813 }, 00:29:54.813 "method": "bdev_nvme_attach_controller" 00:29:54.813 },{ 00:29:54.813 "params": { 00:29:54.813 "name": "Nvme1", 00:29:54.813 "trtype": "tcp", 00:29:54.813 "traddr": "10.0.0.3", 00:29:54.813 "adrfam": "ipv4", 00:29:54.813 "trsvcid": "4420", 00:29:54.813 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:54.813 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:54.813 "hdgst": false, 00:29:54.813 "ddgst": false 00:29:54.813 }, 00:29:54.813 "method": "bdev_nvme_attach_controller" 00:29:54.813 }' 00:29:54.813 11:42:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:29:54.813 11:42:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:29:54.813 11:42:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:29:54.813 11:42:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:29:54.813 11:42:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:29:54.813 11:42:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:29:54.813 11:42:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:29:54.813 11:42:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:29:54.813 11:42:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:29:54.813 11:42:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:54.813 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:29:54.813 ... 00:29:54.813 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:29:54.813 ... 
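
The filename0/filename1 definitions above come from a job file that gen_fio_conf builds on the fly and passes as the second descriptor, /dev/fd/61; its exact contents are not echoed in the trace. A hypothetical job file consistent with those lines and with the bs/numjobs/iodepth/runtime parameters set earlier could be written as follows; the Nvme0n1/Nvme1n1 names assume the usual controller-name-plus-n1 convention for bdevs created by bdev_nvme_attach_controller.

# Hypothetical dif_rand.fio matching the job lines above (an assumption, not
# the harness output; thread=1 is required by the SPDK fio plugins).
cat > dif_rand.fio <<'FIO'
[global]
ioengine=spdk_bdev
thread=1
rw=randread
bs=8k,16k,128k
iodepth=8
numjobs=2
runtime=5
time_based=1

[filename0]
filename=Nvme0n1

[filename1]
filename=Nvme1n1
FIO
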
00:29:54.813 fio-3.35 00:29:54.813 Starting 4 threads 00:29:58.999 00:29:58.999 filename0: (groupid=0, jobs=1): err= 0: pid=108281: Wed Nov 20 11:42:44 2024 00:29:58.999 read: IOPS=1877, BW=14.7MiB/s (15.4MB/s)(73.4MiB/5003msec) 00:29:58.999 slat (nsec): min=4582, max=42130, avg=16526.05, stdev=3826.65 00:29:58.999 clat (usec): min=3017, max=6533, avg=4180.12, stdev=313.69 00:29:58.999 lat (usec): min=3032, max=6557, avg=4196.64, stdev=313.82 00:29:58.999 clat percentiles (usec): 00:29:58.999 | 1.00th=[ 3982], 5.00th=[ 3982], 10.00th=[ 4015], 20.00th=[ 4015], 00:29:58.999 | 30.00th=[ 4047], 40.00th=[ 4080], 50.00th=[ 4080], 60.00th=[ 4113], 00:29:58.999 | 70.00th=[ 4113], 80.00th=[ 4178], 90.00th=[ 4555], 95.00th=[ 5014], 00:29:58.999 | 99.00th=[ 5538], 99.50th=[ 5735], 99.90th=[ 6063], 99.95th=[ 6063], 00:29:58.999 | 99.99th=[ 6521] 00:29:58.999 bw ( KiB/s): min=13952, max=15488, per=24.98%, avg=15017.40, stdev=614.48, samples=10 00:29:58.999 iops : min= 1744, max= 1936, avg=1877.10, stdev=76.76, samples=10 00:29:58.999 lat (msec) : 4=7.12%, 10=92.88% 00:29:58.999 cpu : usr=93.90%, sys=4.86%, ctx=5, majf=0, minf=0 00:29:58.999 IO depths : 1=12.4%, 2=25.0%, 4=50.0%, 8=12.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:58.999 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:58.999 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:58.999 issued rwts: total=9392,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:58.999 latency : target=0, window=0, percentile=100.00%, depth=8 00:29:58.999 filename0: (groupid=0, jobs=1): err= 0: pid=108282: Wed Nov 20 11:42:44 2024 00:29:58.999 read: IOPS=1882, BW=14.7MiB/s (15.4MB/s)(73.6MiB/5003msec) 00:29:58.999 slat (nsec): min=4932, max=40425, avg=9494.69, stdev=3210.62 00:29:58.999 clat (usec): min=1159, max=6008, avg=4199.80, stdev=349.89 00:29:58.999 lat (usec): min=1169, max=6026, avg=4209.30, stdev=349.90 00:29:58.999 clat percentiles (usec): 00:29:58.999 | 1.00th=[ 4015], 5.00th=[ 4047], 10.00th=[ 4047], 20.00th=[ 4080], 00:29:58.999 | 30.00th=[ 4080], 40.00th=[ 4080], 50.00th=[ 4113], 60.00th=[ 4113], 00:29:58.999 | 70.00th=[ 4146], 80.00th=[ 4228], 90.00th=[ 4555], 95.00th=[ 5014], 00:29:58.999 | 99.00th=[ 5407], 99.50th=[ 5866], 99.90th=[ 5997], 99.95th=[ 5997], 00:29:58.999 | 99.99th=[ 5997] 00:29:58.999 bw ( KiB/s): min=13952, max=15744, per=24.98%, avg=15018.67, stdev=643.19, samples=9 00:29:58.999 iops : min= 1744, max= 1968, avg=1877.33, stdev=80.40, samples=9 00:29:58.999 lat (msec) : 2=0.34%, 4=0.57%, 10=99.09% 00:29:58.999 cpu : usr=93.72%, sys=5.02%, ctx=10, majf=0, minf=0 00:29:58.999 IO depths : 1=12.3%, 2=25.0%, 4=50.0%, 8=12.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:58.999 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:58.999 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:58.999 issued rwts: total=9416,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:58.999 latency : target=0, window=0, percentile=100.00%, depth=8 00:29:58.999 filename1: (groupid=0, jobs=1): err= 0: pid=108283: Wed Nov 20 11:42:44 2024 00:29:58.999 read: IOPS=1877, BW=14.7MiB/s (15.4MB/s)(73.4MiB/5003msec) 00:29:58.999 slat (nsec): min=3763, max=50249, avg=16153.78, stdev=4452.47 00:29:58.999 clat (usec): min=2837, max=8538, avg=4179.15, stdev=319.41 00:29:58.999 lat (usec): min=2850, max=8547, avg=4195.30, stdev=319.68 00:29:58.999 clat percentiles (usec): 00:29:58.999 | 1.00th=[ 3982], 5.00th=[ 3982], 10.00th=[ 4015], 20.00th=[ 4015], 00:29:58.999 | 30.00th=[ 4047], 
40.00th=[ 4080], 50.00th=[ 4080], 60.00th=[ 4113], 00:29:58.999 | 70.00th=[ 4113], 80.00th=[ 4178], 90.00th=[ 4490], 95.00th=[ 5014], 00:29:58.999 | 99.00th=[ 5538], 99.50th=[ 5800], 99.90th=[ 6063], 99.95th=[ 6718], 00:29:58.999 | 99.99th=[ 8586] 00:29:58.999 bw ( KiB/s): min=13952, max=15488, per=24.98%, avg=15017.40, stdev=614.48, samples=10 00:29:59.000 iops : min= 1744, max= 1936, avg=1877.10, stdev=76.76, samples=10 00:29:59.000 lat (msec) : 4=6.45%, 10=93.55% 00:29:59.000 cpu : usr=93.92%, sys=4.80%, ctx=7, majf=0, minf=0 00:29:59.000 IO depths : 1=12.3%, 2=25.0%, 4=50.0%, 8=12.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:59.000 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:59.000 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:59.000 issued rwts: total=9392,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:59.000 latency : target=0, window=0, percentile=100.00%, depth=8 00:29:59.000 filename1: (groupid=0, jobs=1): err= 0: pid=108284: Wed Nov 20 11:42:44 2024 00:29:59.000 read: IOPS=1877, BW=14.7MiB/s (15.4MB/s)(73.4MiB/5003msec) 00:29:59.000 slat (nsec): min=3752, max=47263, avg=14877.54, stdev=4984.59 00:29:59.000 clat (usec): min=2165, max=7286, avg=4192.00, stdev=317.59 00:29:59.000 lat (usec): min=2177, max=7294, avg=4206.88, stdev=317.33 00:29:59.000 clat percentiles (usec): 00:29:59.000 | 1.00th=[ 3982], 5.00th=[ 4015], 10.00th=[ 4015], 20.00th=[ 4047], 00:29:59.000 | 30.00th=[ 4047], 40.00th=[ 4080], 50.00th=[ 4080], 60.00th=[ 4113], 00:29:59.000 | 70.00th=[ 4146], 80.00th=[ 4178], 90.00th=[ 4555], 95.00th=[ 5014], 00:29:59.000 | 99.00th=[ 5538], 99.50th=[ 5800], 99.90th=[ 5997], 99.95th=[ 6063], 00:29:59.000 | 99.99th=[ 7308] 00:29:59.000 bw ( KiB/s): min=13952, max=15488, per=24.98%, avg=15014.40, stdev=612.53, samples=10 00:29:59.000 iops : min= 1744, max= 1936, avg=1876.80, stdev=76.57, samples=10 00:29:59.000 lat (msec) : 4=4.23%, 10=95.77% 00:29:59.000 cpu : usr=93.80%, sys=4.78%, ctx=12, majf=0, minf=0 00:29:59.000 IO depths : 1=12.4%, 2=24.9%, 4=50.1%, 8=12.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:59.000 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:59.000 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:59.000 issued rwts: total=9392,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:59.000 latency : target=0, window=0, percentile=100.00%, depth=8 00:29:59.000 00:29:59.000 Run status group 0 (all jobs): 00:29:59.000 READ: bw=58.7MiB/s (61.6MB/s), 14.7MiB/s-14.7MiB/s (15.4MB/s-15.4MB/s), io=294MiB (308MB), run=5003-5003msec 00:29:59.259 11:42:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:29:59.259 11:42:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:29:59.259 11:42:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:29:59.259 11:42:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:29:59.259 11:42:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:29:59.259 11:42:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:29:59.259 11:42:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:59.260 11:42:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:59.260 11:42:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:59.260 11:42:44 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:29:59.260 11:42:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:59.260 11:42:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:59.260 11:42:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:59.260 11:42:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:29:59.260 11:42:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:29:59.260 11:42:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:29:59.260 11:42:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:59.260 11:42:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:59.260 11:42:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:59.260 11:42:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:59.260 11:42:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:29:59.260 11:42:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:59.260 11:42:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:59.260 11:42:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:59.260 00:29:59.260 real 0m23.529s 00:29:59.260 user 2m6.003s 00:29:59.260 sys 0m5.234s 00:29:59.260 ************************************ 00:29:59.260 END TEST fio_dif_rand_params 00:29:59.260 ************************************ 00:29:59.260 11:42:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:59.260 11:42:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:59.260 11:42:44 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:29:59.260 11:42:44 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:29:59.260 11:42:44 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:59.260 11:42:44 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:29:59.260 ************************************ 00:29:59.260 START TEST fio_dif_digest 00:29:59.260 ************************************ 00:29:59.260 11:42:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1129 -- # fio_dif_digest 00:29:59.260 11:42:44 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:29:59.260 11:42:44 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:29:59.260 11:42:44 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:29:59.260 11:42:44 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:29:59.260 11:42:44 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:29:59.260 11:42:44 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:29:59.260 11:42:44 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:29:59.260 11:42:44 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:29:59.260 11:42:44 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:29:59.260 11:42:44 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:29:59.260 11:42:44 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:29:59.260 11:42:44 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 
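The destroy_subsystems call that just ran for subsystems 0 and 1 amounts to two RPCs per subsystem; a minimal sketch using SPDK's rpc.py (the script path is assumed from the repo layout seen elsewhere in this log) is:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # assumed location of the RPC client
    for i in 0 1; do
      "$rpc" nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode$i"   # tear down the NVMe-oF subsystem first
      "$rpc" bdev_null_delete "bdev_null$i"                        # then release the backing null bdev
    done

The fio_dif_digest setup that follows rebuilds a single subsystem the same way, but on a DIF-capable null bdev (--md-size 16 --dif-type 3) and with header and data digests enabled.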
00:29:59.260 11:42:44 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:29:59.260 11:42:44 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:29:59.260 11:42:44 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:29:59.260 11:42:44 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:29:59.260 11:42:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:59.260 11:42:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:29:59.260 bdev_null0 00:29:59.260 11:42:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:59.260 11:42:44 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:29:59.260 11:42:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:59.260 11:42:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:29:59.260 11:42:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:59.260 11:42:44 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:29:59.260 11:42:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:59.260 11:42:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:29:59.260 11:42:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:59.260 11:42:44 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:29:59.260 11:42:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:59.260 11:42:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:29:59.260 [2024-11-20 11:42:44.812584] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:29:59.260 11:42:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:59.260 11:42:44 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:29:59.260 11:42:44 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:29:59.260 11:42:44 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:29:59.260 11:42:44 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # config=() 00:29:59.260 11:42:44 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # local subsystem config 00:29:59.260 11:42:44 nvmf_dif.fio_dif_digest -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:59.260 11:42:44 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:59.260 { 00:29:59.260 "params": { 00:29:59.260 "name": "Nvme$subsystem", 00:29:59.260 "trtype": "$TEST_TRANSPORT", 00:29:59.260 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:59.260 "adrfam": "ipv4", 00:29:59.260 "trsvcid": "$NVMF_PORT", 00:29:59.260 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:59.260 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:59.260 "hdgst": ${hdgst:-false}, 00:29:59.260 "ddgst": ${ddgst:-false} 00:29:59.260 }, 00:29:59.260 "method": "bdev_nvme_attach_controller" 00:29:59.260 } 00:29:59.260 EOF 00:29:59.260 )") 00:29:59.260 11:42:44 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:59.260 11:42:44 
nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:29:59.260 11:42:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:59.260 11:42:44 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:29:59.260 11:42:44 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:29:59.260 11:42:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:29:59.260 11:42:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:59.260 11:42:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local sanitizers 00:29:59.260 11:42:44 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # cat 00:29:59.260 11:42:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:29:59.260 11:42:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # shift 00:29:59.260 11:42:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # local asan_lib= 00:29:59.260 11:42:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:29:59.260 11:42:44 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:29:59.260 11:42:44 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:29:59.260 11:42:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libasan 00:29:59.260 11:42:44 nvmf_dif.fio_dif_digest -- nvmf/common.sh@584 -- # jq . 00:29:59.260 11:42:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:29:59.260 11:42:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:29:59.260 11:42:44 nvmf_dif.fio_dif_digest -- nvmf/common.sh@585 -- # IFS=, 00:29:59.260 11:42:44 nvmf_dif.fio_dif_digest -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:29:59.260 "params": { 00:29:59.260 "name": "Nvme0", 00:29:59.260 "trtype": "tcp", 00:29:59.260 "traddr": "10.0.0.3", 00:29:59.260 "adrfam": "ipv4", 00:29:59.260 "trsvcid": "4420", 00:29:59.260 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:59.260 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:59.260 "hdgst": true, 00:29:59.260 "ddgst": true 00:29:59.260 }, 00:29:59.260 "method": "bdev_nvme_attach_controller" 00:29:59.260 }' 00:29:59.520 11:42:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:29:59.520 11:42:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:29:59.520 11:42:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:29:59.520 11:42:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:29:59.520 11:42:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:29:59.520 11:42:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:29:59.520 11:42:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:29:59.520 11:42:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:29:59.520 11:42:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:29:59.520 11:42:44 nvmf_dif.fio_dif_digest -- 
common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:59.520 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:29:59.520 ... 00:29:59.520 fio-3.35 00:29:59.520 Starting 3 threads 00:30:11.743 00:30:11.743 filename0: (groupid=0, jobs=1): err= 0: pid=108390: Wed Nov 20 11:42:55 2024 00:30:11.743 read: IOPS=203, BW=25.5MiB/s (26.7MB/s)(255MiB/10003msec) 00:30:11.743 slat (nsec): min=8111, max=51504, avg=14518.03, stdev=4069.63 00:30:11.743 clat (usec): min=7235, max=18884, avg=14688.90, stdev=1466.57 00:30:11.743 lat (usec): min=7247, max=18893, avg=14703.42, stdev=1466.63 00:30:11.743 clat percentiles (usec): 00:30:11.743 | 1.00th=[ 8979], 5.00th=[12649], 10.00th=[13304], 20.00th=[13829], 00:30:11.743 | 30.00th=[14222], 40.00th=[14484], 50.00th=[14746], 60.00th=[15008], 00:30:11.743 | 70.00th=[15401], 80.00th=[15664], 90.00th=[16319], 95.00th=[16712], 00:30:11.743 | 99.00th=[17695], 99.50th=[18220], 99.90th=[18744], 99.95th=[18744], 00:30:11.743 | 99.99th=[19006] 00:30:11.743 bw ( KiB/s): min=24832, max=28672, per=33.11%, avg=26031.16, stdev=827.57, samples=19 00:30:11.743 iops : min= 194, max= 224, avg=203.37, stdev= 6.47, samples=19 00:30:11.744 lat (msec) : 10=2.40%, 20=97.60% 00:30:11.744 cpu : usr=92.72%, sys=5.82%, ctx=32, majf=0, minf=0 00:30:11.744 IO depths : 1=1.3%, 2=98.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:11.744 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:11.744 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:11.744 issued rwts: total=2040,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:11.744 latency : target=0, window=0, percentile=100.00%, depth=3 00:30:11.744 filename0: (groupid=0, jobs=1): err= 0: pid=108391: Wed Nov 20 11:42:55 2024 00:30:11.744 read: IOPS=172, BW=21.6MiB/s (22.6MB/s)(217MiB/10043msec) 00:30:11.744 slat (nsec): min=8188, max=75727, avg=14857.62, stdev=5991.48 00:30:11.744 clat (usec): min=9563, max=56835, avg=17320.93, stdev=1873.27 00:30:11.744 lat (usec): min=9573, max=56850, avg=17335.78, stdev=1873.74 00:30:11.744 clat percentiles (usec): 00:30:11.744 | 1.00th=[10814], 5.00th=[15926], 10.00th=[16319], 20.00th=[16581], 00:30:11.744 | 30.00th=[16909], 40.00th=[17171], 50.00th=[17433], 60.00th=[17433], 00:30:11.744 | 70.00th=[17695], 80.00th=[17957], 90.00th=[18482], 95.00th=[19006], 00:30:11.744 | 99.00th=[22676], 99.50th=[24249], 99.90th=[46400], 99.95th=[56886], 00:30:11.744 | 99.99th=[56886] 00:30:11.744 bw ( KiB/s): min=20224, max=23808, per=28.17%, avg=22148.65, stdev=817.72, samples=20 00:30:11.744 iops : min= 158, max= 186, avg=173.00, stdev= 6.44, samples=20 00:30:11.744 lat (msec) : 10=0.52%, 20=97.87%, 50=1.56%, 100=0.06% 00:30:11.744 cpu : usr=93.42%, sys=5.27%, ctx=24, majf=0, minf=0 00:30:11.744 IO depths : 1=4.5%, 2=95.5%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:11.744 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:11.744 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:11.744 issued rwts: total=1735,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:11.744 latency : target=0, window=0, percentile=100.00%, depth=3 00:30:11.744 filename0: (groupid=0, jobs=1): err= 0: pid=108392: Wed Nov 20 11:42:55 2024 00:30:11.744 read: IOPS=239, BW=29.9MiB/s (31.3MB/s)(299MiB/10009msec) 00:30:11.744 slat (nsec): min=7696, max=52957, avg=14960.11, stdev=3074.08 00:30:11.744 clat 
(usec): min=8513, max=56746, avg=12528.55, stdev=3166.15 00:30:11.744 lat (usec): min=8521, max=56764, avg=12543.51, stdev=3166.28 00:30:11.744 clat percentiles (usec): 00:30:11.744 | 1.00th=[10028], 5.00th=[10814], 10.00th=[11207], 20.00th=[11600], 00:30:11.744 | 30.00th=[11863], 40.00th=[12125], 50.00th=[12256], 60.00th=[12518], 00:30:11.744 | 70.00th=[12780], 80.00th=[13042], 90.00th=[13435], 95.00th=[13829], 00:30:11.744 | 99.00th=[17171], 99.50th=[53740], 99.90th=[55837], 99.95th=[56361], 00:30:11.744 | 99.99th=[56886] 00:30:11.744 bw ( KiB/s): min=25600, max=32256, per=39.09%, avg=30726.84, stdev=1552.83, samples=19 00:30:11.744 iops : min= 200, max= 252, avg=240.05, stdev=12.13, samples=19 00:30:11.744 lat (msec) : 10=0.54%, 20=98.96%, 100=0.50% 00:30:11.744 cpu : usr=92.28%, sys=6.16%, ctx=13, majf=0, minf=0 00:30:11.744 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:11.744 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:11.744 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:11.744 issued rwts: total=2393,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:11.744 latency : target=0, window=0, percentile=100.00%, depth=3 00:30:11.744 00:30:11.744 Run status group 0 (all jobs): 00:30:11.744 READ: bw=76.8MiB/s (80.5MB/s), 21.6MiB/s-29.9MiB/s (22.6MB/s-31.3MB/s), io=771MiB (808MB), run=10003-10043msec 00:30:11.744 11:42:55 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:30:11.744 11:42:55 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:30:11.744 11:42:55 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:30:11.744 11:42:55 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:30:11.744 11:42:55 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:30:11.744 11:42:55 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:11.744 11:42:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:11.744 11:42:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:30:11.744 11:42:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:11.744 11:42:55 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:30:11.744 11:42:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:11.744 11:42:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:30:11.744 ************************************ 00:30:11.744 END TEST fio_dif_digest 00:30:11.744 ************************************ 00:30:11.744 11:42:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:11.744 00:30:11.744 real 0m11.019s 00:30:11.744 user 0m28.566s 00:30:11.744 sys 0m1.965s 00:30:11.744 11:42:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:11.744 11:42:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:30:11.744 11:42:55 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:30:11.744 11:42:55 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:30:11.744 11:42:55 nvmf_dif -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:11.744 11:42:55 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:30:11.744 11:42:55 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:11.744 11:42:55 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:30:11.744 11:42:55 nvmf_dif -- 
nvmf/common.sh@125 -- # for i in {1..20} 00:30:11.744 11:42:55 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:11.744 rmmod nvme_tcp 00:30:11.744 rmmod nvme_fabrics 00:30:11.744 rmmod nvme_keyring 00:30:11.744 11:42:55 nvmf_dif -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:11.744 11:42:55 nvmf_dif -- nvmf/common.sh@128 -- # set -e 00:30:11.744 11:42:55 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:30:11.744 11:42:55 nvmf_dif -- nvmf/common.sh@517 -- # '[' -n 107656 ']' 00:30:11.744 11:42:55 nvmf_dif -- nvmf/common.sh@518 -- # killprocess 107656 00:30:11.744 11:42:55 nvmf_dif -- common/autotest_common.sh@954 -- # '[' -z 107656 ']' 00:30:11.744 11:42:55 nvmf_dif -- common/autotest_common.sh@958 -- # kill -0 107656 00:30:11.744 11:42:55 nvmf_dif -- common/autotest_common.sh@959 -- # uname 00:30:11.744 11:42:55 nvmf_dif -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:11.744 11:42:55 nvmf_dif -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 107656 00:30:11.744 killing process with pid 107656 00:30:11.744 11:42:55 nvmf_dif -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:11.744 11:42:55 nvmf_dif -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:11.744 11:42:55 nvmf_dif -- common/autotest_common.sh@972 -- # echo 'killing process with pid 107656' 00:30:11.744 11:42:55 nvmf_dif -- common/autotest_common.sh@973 -- # kill 107656 00:30:11.744 11:42:55 nvmf_dif -- common/autotest_common.sh@978 -- # wait 107656 00:30:11.744 11:42:56 nvmf_dif -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:30:11.744 11:42:56 nvmf_dif -- nvmf/common.sh@521 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:30:11.744 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:30:11.744 Waiting for block devices as requested 00:30:11.744 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:30:11.744 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:30:11.744 11:42:56 nvmf_dif -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:11.744 11:42:56 nvmf_dif -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:11.744 11:42:56 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:30:11.744 11:42:56 nvmf_dif -- nvmf/common.sh@791 -- # iptables-save 00:30:11.744 11:42:56 nvmf_dif -- nvmf/common.sh@791 -- # iptables-restore 00:30:11.744 11:42:56 nvmf_dif -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:11.744 11:42:56 nvmf_dif -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:11.744 11:42:56 nvmf_dif -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:30:11.744 11:42:56 nvmf_dif -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:30:11.744 11:42:56 nvmf_dif -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:30:11.744 11:42:56 nvmf_dif -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:30:11.744 11:42:56 nvmf_dif -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:30:11.744 11:42:56 nvmf_dif -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:30:11.744 11:42:56 nvmf_dif -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:30:11.744 11:42:56 nvmf_dif -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:30:11.744 11:42:56 nvmf_dif -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:30:11.744 11:42:56 nvmf_dif -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:30:11.744 11:42:56 nvmf_dif -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:30:11.744 
11:42:56 nvmf_dif -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:30:11.744 11:42:56 nvmf_dif -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:30:11.744 11:42:56 nvmf_dif -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:30:11.744 11:42:56 nvmf_dif -- nvmf/common.sh@246 -- # remove_spdk_ns 00:30:11.744 11:42:56 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:11.744 11:42:56 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:30:11.744 11:42:56 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:11.744 11:42:56 nvmf_dif -- nvmf/common.sh@300 -- # return 0 00:30:11.744 00:30:11.744 real 0m59.370s 00:30:11.744 user 3m51.343s 00:30:11.744 sys 0m14.410s 00:30:11.744 11:42:56 nvmf_dif -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:11.744 ************************************ 00:30:11.744 END TEST nvmf_dif 00:30:11.744 ************************************ 00:30:11.744 11:42:56 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:11.744 11:42:56 -- spdk/autotest.sh@290 -- # run_test nvmf_abort_qd_sizes /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:30:11.745 11:42:56 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:30:11.745 11:42:56 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:11.745 11:42:56 -- common/autotest_common.sh@10 -- # set +x 00:30:11.745 ************************************ 00:30:11.745 START TEST nvmf_abort_qd_sizes 00:30:11.745 ************************************ 00:30:11.745 11:42:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:30:11.745 * Looking for test storage... 
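nvmftestfini, as logged above, simply unwinds what nvmf_veth_init created at the start of the suite; stripped of the helper functions it is roughly:

    modprobe -v -r nvme-tcp                                   # also drops nvme_fabrics / nvme_keyring, as the rmmod lines show
    iptables-save | grep -v SPDK_NVMF | iptables-restore      # remove only the rules the test tagged with SPDK_NVMF
    for l in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
      ip link set "$l" nomaster; ip link set "$l" down
    done
    ip link delete nvmf_br type bridge
    ip link delete nvmf_init_if; ip link delete nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
    ip netns delete nvmf_tgt_ns_spdk                          # assumption: remove_spdk_ns finishes by deleting the namespace

The same veth/bridge topology is rebuilt from scratch by nvmftestinit below before the abort_qd_sizes tests start.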
00:30:11.745 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:30:11.745 11:42:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:30:11.745 11:42:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:30:11.745 11:42:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # lcov --version 00:30:11.745 11:42:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:30:11.745 11:42:57 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:11.745 11:42:57 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:11.745 11:42:57 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:11.745 11:42:57 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:30:11.745 11:42:57 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:30:11.745 11:42:57 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:30:11.745 11:42:57 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:30:11.745 11:42:57 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:30:11.745 11:42:57 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:30:11.745 11:42:57 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:30:11.745 11:42:57 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:11.745 11:42:57 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:30:11.745 11:42:57 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:30:11.745 11:42:57 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:11.745 11:42:57 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:11.745 11:42:57 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:30:11.745 11:42:57 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:30:11.745 11:42:57 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:11.745 11:42:57 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:30:11.745 11:42:57 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:30:11.745 11:42:57 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:30:11.745 11:42:57 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:30:11.745 11:42:57 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:11.745 11:42:57 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:30:11.745 11:42:57 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:30:11.745 11:42:57 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:11.745 11:42:57 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:11.745 11:42:57 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:30:11.745 11:42:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:11.745 11:42:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:30:11.745 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:11.745 --rc genhtml_branch_coverage=1 00:30:11.745 --rc genhtml_function_coverage=1 00:30:11.745 --rc genhtml_legend=1 00:30:11.745 --rc geninfo_all_blocks=1 00:30:11.745 --rc geninfo_unexecuted_blocks=1 00:30:11.745 00:30:11.745 ' 00:30:11.745 11:42:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:30:11.745 
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:11.745 --rc genhtml_branch_coverage=1 00:30:11.745 --rc genhtml_function_coverage=1 00:30:11.745 --rc genhtml_legend=1 00:30:11.745 --rc geninfo_all_blocks=1 00:30:11.745 --rc geninfo_unexecuted_blocks=1 00:30:11.745 00:30:11.745 ' 00:30:11.745 11:42:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:30:11.745 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:11.745 --rc genhtml_branch_coverage=1 00:30:11.745 --rc genhtml_function_coverage=1 00:30:11.745 --rc genhtml_legend=1 00:30:11.745 --rc geninfo_all_blocks=1 00:30:11.745 --rc geninfo_unexecuted_blocks=1 00:30:11.745 00:30:11.745 ' 00:30:11.745 11:42:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:30:11.745 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:11.745 --rc genhtml_branch_coverage=1 00:30:11.745 --rc genhtml_function_coverage=1 00:30:11.745 --rc genhtml_legend=1 00:30:11.745 --rc geninfo_all_blocks=1 00:30:11.745 --rc geninfo_unexecuted_blocks=1 00:30:11.745 00:30:11.745 ' 00:30:11.745 11:42:57 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:30:11.745 11:42:57 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:30:11.745 11:42:57 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:11.745 11:42:57 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:11.745 11:42:57 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:11.745 11:42:57 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:11.745 11:42:57 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:11.745 11:42:57 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:11.745 11:42:57 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:11.745 11:42:57 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:11.745 11:42:57 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:11.745 11:42:57 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:11.745 11:42:57 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a 00:30:11.745 11:42:57 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=806d442c-08ba-4719-b76d-33169283459a 00:30:11.745 11:42:57 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:11.745 11:42:57 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:11.745 11:42:57 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:30:11.745 11:42:57 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:11.745 11:42:57 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:30:11.745 11:42:57 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:30:11.745 11:42:57 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:11.745 11:42:57 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:11.745 11:42:57 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:11.745 11:42:57 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:11.745 11:42:57 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:11.745 11:42:57 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:11.745 11:42:57 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:30:11.745 11:42:57 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:11.745 11:42:57 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:30:11.745 11:42:57 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:11.745 11:42:57 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:11.745 11:42:57 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:11.745 11:42:57 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:11.745 11:42:57 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:11.745 11:42:57 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:11.745 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:11.745 11:42:57 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:11.745 11:42:57 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:11.745 11:42:57 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:11.745 11:42:57 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:30:11.745 11:42:57 nvmf_abort_qd_sizes -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:11.745 11:42:57 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:11.745 11:42:57 nvmf_abort_qd_sizes -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:11.745 11:42:57 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:11.745 11:42:57 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:11.745 11:42:57 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:11.745 11:42:57 nvmf_abort_qd_sizes -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:30:11.745 11:42:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:11.745 11:42:57 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:30:11.745 11:42:57 nvmf_abort_qd_sizes -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:30:11.745 11:42:57 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:30:11.745 11:42:57 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:30:11.745 11:42:57 nvmf_abort_qd_sizes -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:30:11.745 11:42:57 nvmf_abort_qd_sizes -- nvmf/common.sh@460 -- # nvmf_veth_init 00:30:11.745 11:42:57 nvmf_abort_qd_sizes -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:11.745 11:42:57 nvmf_abort_qd_sizes -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:30:11.745 11:42:57 nvmf_abort_qd_sizes -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:30:11.746 11:42:57 nvmf_abort_qd_sizes -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:30:11.746 11:42:57 nvmf_abort_qd_sizes -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:11.746 11:42:57 nvmf_abort_qd_sizes -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:30:11.746 11:42:57 nvmf_abort_qd_sizes -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:30:11.746 11:42:57 nvmf_abort_qd_sizes -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:30:11.746 11:42:57 nvmf_abort_qd_sizes -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:30:11.746 11:42:57 nvmf_abort_qd_sizes -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:30:11.746 11:42:57 nvmf_abort_qd_sizes -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:30:11.746 11:42:57 nvmf_abort_qd_sizes -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:11.746 11:42:57 nvmf_abort_qd_sizes -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:30:11.746 11:42:57 nvmf_abort_qd_sizes -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:30:11.746 11:42:57 nvmf_abort_qd_sizes -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:30:11.746 11:42:57 nvmf_abort_qd_sizes -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:30:11.746 11:42:57 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:30:11.746 Cannot find device "nvmf_init_br" 00:30:11.746 11:42:57 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # true 00:30:11.746 11:42:57 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:30:11.746 Cannot find device "nvmf_init_br2" 00:30:11.746 11:42:57 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # true 00:30:11.746 11:42:57 nvmf_abort_qd_sizes -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:30:11.746 Cannot find device "nvmf_tgt_br" 00:30:11.746 11:42:57 nvmf_abort_qd_sizes -- nvmf/common.sh@164 -- # true 00:30:11.746 11:42:57 nvmf_abort_qd_sizes -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:30:11.746 Cannot find device "nvmf_tgt_br2" 00:30:11.746 11:42:57 nvmf_abort_qd_sizes -- nvmf/common.sh@165 -- # true 00:30:11.746 11:42:57 nvmf_abort_qd_sizes -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:30:11.746 Cannot find device "nvmf_init_br" 00:30:11.746 11:42:57 nvmf_abort_qd_sizes -- nvmf/common.sh@166 -- # true 00:30:11.746 11:42:57 nvmf_abort_qd_sizes -- nvmf/common.sh@167 -- # ip link set 
nvmf_init_br2 down 00:30:11.746 Cannot find device "nvmf_init_br2" 00:30:11.746 11:42:57 nvmf_abort_qd_sizes -- nvmf/common.sh@167 -- # true 00:30:11.746 11:42:57 nvmf_abort_qd_sizes -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:30:11.746 Cannot find device "nvmf_tgt_br" 00:30:11.746 11:42:57 nvmf_abort_qd_sizes -- nvmf/common.sh@168 -- # true 00:30:11.746 11:42:57 nvmf_abort_qd_sizes -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:30:11.746 Cannot find device "nvmf_tgt_br2" 00:30:11.746 11:42:57 nvmf_abort_qd_sizes -- nvmf/common.sh@169 -- # true 00:30:11.746 11:42:57 nvmf_abort_qd_sizes -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:30:11.746 Cannot find device "nvmf_br" 00:30:11.746 11:42:57 nvmf_abort_qd_sizes -- nvmf/common.sh@170 -- # true 00:30:11.746 11:42:57 nvmf_abort_qd_sizes -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:30:11.746 Cannot find device "nvmf_init_if" 00:30:11.746 11:42:57 nvmf_abort_qd_sizes -- nvmf/common.sh@171 -- # true 00:30:11.746 11:42:57 nvmf_abort_qd_sizes -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:30:11.746 Cannot find device "nvmf_init_if2" 00:30:11.746 11:42:57 nvmf_abort_qd_sizes -- nvmf/common.sh@172 -- # true 00:30:11.746 11:42:57 nvmf_abort_qd_sizes -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:30:11.746 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:30:11.746 11:42:57 nvmf_abort_qd_sizes -- nvmf/common.sh@173 -- # true 00:30:11.746 11:42:57 nvmf_abort_qd_sizes -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:30:11.746 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:30:11.746 11:42:57 nvmf_abort_qd_sizes -- nvmf/common.sh@174 -- # true 00:30:11.746 11:42:57 nvmf_abort_qd_sizes -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:30:11.746 11:42:57 nvmf_abort_qd_sizes -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:30:11.746 11:42:57 nvmf_abort_qd_sizes -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:30:11.746 11:42:57 nvmf_abort_qd_sizes -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:30:11.746 11:42:57 nvmf_abort_qd_sizes -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:30:11.746 11:42:57 nvmf_abort_qd_sizes -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:30:11.746 11:42:57 nvmf_abort_qd_sizes -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:30:11.746 11:42:57 nvmf_abort_qd_sizes -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:30:11.746 11:42:57 nvmf_abort_qd_sizes -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:30:11.746 11:42:57 nvmf_abort_qd_sizes -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:30:11.746 11:42:57 nvmf_abort_qd_sizes -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:30:11.746 11:42:57 nvmf_abort_qd_sizes -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:30:11.746 11:42:57 nvmf_abort_qd_sizes -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:30:11.746 11:42:57 nvmf_abort_qd_sizes -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:30:11.746 11:42:57 nvmf_abort_qd_sizes -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 
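Taken together with the commands that continue below, the topology nvmf_veth_init is building here boils down to the following sketch (interface names and addresses exactly as logged; the second pair, nvmf_init_if2/nvmf_tgt_if2 on 10.0.0.2 and 10.0.0.4, is created the same way):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator end stays in the root namespace
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br      # target end moves into the namespace
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    for l in nvmf_init_if nvmf_init_br nvmf_tgt_br; do ip link set "$l" up; done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br                       # the bridge joins the two veth halves
    ip link set nvmf_tgt_br  master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port toward the initiator

The ping checks that follow confirm all four addresses are reachable across the bridge before the target application is started inside the namespace.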
00:30:11.746 11:42:57 nvmf_abort_qd_sizes -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:30:11.746 11:42:57 nvmf_abort_qd_sizes -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:30:11.746 11:42:57 nvmf_abort_qd_sizes -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:30:12.004 11:42:57 nvmf_abort_qd_sizes -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:30:12.004 11:42:57 nvmf_abort_qd_sizes -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:30:12.005 11:42:57 nvmf_abort_qd_sizes -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:30:12.005 11:42:57 nvmf_abort_qd_sizes -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:30:12.005 11:42:57 nvmf_abort_qd_sizes -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:30:12.005 11:42:57 nvmf_abort_qd_sizes -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:30:12.005 11:42:57 nvmf_abort_qd_sizes -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:30:12.005 11:42:57 nvmf_abort_qd_sizes -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:30:12.005 11:42:57 nvmf_abort_qd_sizes -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:30:12.005 11:42:57 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:30:12.005 11:42:57 nvmf_abort_qd_sizes -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:30:12.005 11:42:57 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:30:12.005 11:42:57 nvmf_abort_qd_sizes -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:30:12.005 11:42:57 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:30:12.005 11:42:57 nvmf_abort_qd_sizes -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:30:12.005 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:30:12.005 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.107 ms 00:30:12.005 00:30:12.005 --- 10.0.0.3 ping statistics --- 00:30:12.005 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:12.005 rtt min/avg/max/mdev = 0.107/0.107/0.107/0.000 ms 00:30:12.005 11:42:57 nvmf_abort_qd_sizes -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:30:12.005 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:30:12.005 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.076 ms 00:30:12.005 00:30:12.005 --- 10.0.0.4 ping statistics --- 00:30:12.005 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:12.005 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:30:12.005 11:42:57 nvmf_abort_qd_sizes -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:30:12.005 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:12.005 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:30:12.005 00:30:12.005 --- 10.0.0.1 ping statistics --- 00:30:12.005 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:12.005 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:30:12.005 11:42:57 nvmf_abort_qd_sizes -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:30:12.005 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:12.005 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.062 ms 00:30:12.005 00:30:12.005 --- 10.0.0.2 ping statistics --- 00:30:12.005 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:12.005 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:30:12.005 11:42:57 nvmf_abort_qd_sizes -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:12.005 11:42:57 nvmf_abort_qd_sizes -- nvmf/common.sh@461 -- # return 0 00:30:12.005 11:42:57 nvmf_abort_qd_sizes -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:30:12.005 11:42:57 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:30:12.571 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:30:12.571 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:30:12.830 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:30:12.830 11:42:58 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:12.830 11:42:58 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:12.830 11:42:58 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:12.830 11:42:58 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:12.830 11:42:58 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:12.830 11:42:58 nvmf_abort_qd_sizes -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:12.830 11:42:58 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:30:12.830 11:42:58 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:12.830 11:42:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:12.830 11:42:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:30:12.830 11:42:58 nvmf_abort_qd_sizes -- nvmf/common.sh@509 -- # nvmfpid=109025 00:30:12.830 11:42:58 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:30:12.830 11:42:58 nvmf_abort_qd_sizes -- nvmf/common.sh@510 -- # waitforlisten 109025 00:30:12.830 11:42:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # '[' -z 109025 ']' 00:30:12.830 11:42:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:12.830 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:12.830 11:42:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:12.830 11:42:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:12.830 11:42:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:12.830 11:42:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:30:12.830 [2024-11-20 11:42:58.376528] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 
00:30:12.830 [2024-11-20 11:42:58.376671] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:13.105 [2024-11-20 11:42:58.529979] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:13.105 [2024-11-20 11:42:58.566692] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:13.105 [2024-11-20 11:42:58.566739] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:13.105 [2024-11-20 11:42:58.566750] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:13.105 [2024-11-20 11:42:58.566758] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:13.105 [2024-11-20 11:42:58.566766] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:13.105 [2024-11-20 11:42:58.567456] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:13.105 [2024-11-20 11:42:58.568377] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:13.105 [2024-11-20 11:42:58.568526] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:30:13.105 [2024-11-20 11:42:58.568531] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:13.105 11:42:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:13.105 11:42:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@868 -- # return 0 00:30:13.105 11:42:58 nvmf_abort_qd_sizes -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:13.105 11:42:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:13.105 11:42:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:30:13.105 11:42:58 nvmf_abort_qd_sizes -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:13.105 11:42:58 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:30:13.105 11:42:58 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:30:13.105 11:42:58 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:30:13.105 11:42:58 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:30:13.105 11:42:58 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:30:13.105 11:42:58 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n '' ]] 00:30:13.105 11:42:58 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:30:13.364 11:42:58 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:30:13.364 11:42:58 nvmf_abort_qd_sizes -- scripts/common.sh@298 -- # local bdf= 00:30:13.364 11:42:58 nvmf_abort_qd_sizes -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:30:13.364 11:42:58 nvmf_abort_qd_sizes -- scripts/common.sh@233 -- # local class 00:30:13.364 11:42:58 nvmf_abort_qd_sizes -- scripts/common.sh@234 -- # local subclass 00:30:13.364 11:42:58 nvmf_abort_qd_sizes -- scripts/common.sh@235 -- # local progif 00:30:13.364 11:42:58 nvmf_abort_qd_sizes -- scripts/common.sh@236 -- # printf %02x 1 00:30:13.364 11:42:58 nvmf_abort_qd_sizes -- scripts/common.sh@236 -- # 
class=01 00:30:13.364 11:42:58 nvmf_abort_qd_sizes -- scripts/common.sh@237 -- # printf %02x 8 00:30:13.364 11:42:58 nvmf_abort_qd_sizes -- scripts/common.sh@237 -- # subclass=08 00:30:13.364 11:42:58 nvmf_abort_qd_sizes -- scripts/common.sh@238 -- # printf %02x 2 00:30:13.364 11:42:58 nvmf_abort_qd_sizes -- scripts/common.sh@238 -- # progif=02 00:30:13.364 11:42:58 nvmf_abort_qd_sizes -- scripts/common.sh@240 -- # hash lspci 00:30:13.364 11:42:58 nvmf_abort_qd_sizes -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:30:13.364 11:42:58 nvmf_abort_qd_sizes -- scripts/common.sh@243 -- # grep -i -- -p02 00:30:13.364 11:42:58 nvmf_abort_qd_sizes -- scripts/common.sh@242 -- # lspci -mm -n -D 00:30:13.364 11:42:58 nvmf_abort_qd_sizes -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:30:13.364 11:42:58 nvmf_abort_qd_sizes -- scripts/common.sh@245 -- # tr -d '"' 00:30:13.364 11:42:58 nvmf_abort_qd_sizes -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:30:13.364 11:42:58 nvmf_abort_qd_sizes -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:30:13.364 11:42:58 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # local i 00:30:13.364 11:42:58 nvmf_abort_qd_sizes -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:30:13.364 11:42:58 nvmf_abort_qd_sizes -- scripts/common.sh@25 -- # [[ -z '' ]] 00:30:13.364 11:42:58 nvmf_abort_qd_sizes -- scripts/common.sh@27 -- # return 0 00:30:13.364 11:42:58 nvmf_abort_qd_sizes -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:30:13.364 11:42:58 nvmf_abort_qd_sizes -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:30:13.364 11:42:58 nvmf_abort_qd_sizes -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:30:13.364 11:42:58 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # local i 00:30:13.364 11:42:58 nvmf_abort_qd_sizes -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:30:13.364 11:42:58 nvmf_abort_qd_sizes -- scripts/common.sh@25 -- # [[ -z '' ]] 00:30:13.364 11:42:58 nvmf_abort_qd_sizes -- scripts/common.sh@27 -- # return 0 00:30:13.364 11:42:58 nvmf_abort_qd_sizes -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:30:13.364 11:42:58 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:30:13.364 11:42:58 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:30:13.364 11:42:58 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:30:13.364 11:42:58 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:30:13.364 11:42:58 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:30:13.364 11:42:58 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:30:13.364 11:42:58 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:30:13.364 11:42:58 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:30:13.364 11:42:58 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:30:13.364 11:42:58 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:30:13.364 11:42:58 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 2 )) 00:30:13.364 11:42:58 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:30:13.364 11:42:58 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 2 > 0 )) 00:30:13.364 11:42:58 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:00:10.0 00:30:13.364 11:42:58 
nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:30:13.364 11:42:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:30:13.364 11:42:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:13.364 11:42:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:30:13.364 ************************************ 00:30:13.364 START TEST spdk_target_abort 00:30:13.364 ************************************ 00:30:13.364 11:42:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1129 -- # spdk_target 00:30:13.364 11:42:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:30:13.364 11:42:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:00:10.0 -b spdk_target 00:30:13.364 11:42:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:13.364 11:42:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:30:13.364 spdk_targetn1 00:30:13.364 11:42:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:13.364 11:42:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:13.364 11:42:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:13.364 11:42:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:30:13.364 [2024-11-20 11:42:58.816242] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:13.364 11:42:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:13.364 11:42:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:30:13.364 11:42:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:13.364 11:42:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:30:13.364 11:42:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:13.364 11:42:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:30:13.364 11:42:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:13.364 11:42:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:30:13.364 11:42:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:13.365 11:42:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.3 -s 4420 00:30:13.365 11:42:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:13.365 11:42:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:30:13.365 [2024-11-20 11:42:58.856479] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:30:13.365 11:42:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:13.365 11:42:58 
nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.3 4420 nqn.2016-06.io.spdk:testnqn 00:30:13.365 11:42:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:30:13.365 11:42:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:30:13.365 11:42:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.3 00:30:13.365 11:42:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:30:13.365 11:42:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:30:13.365 11:42:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:30:13.365 11:42:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:30:13.365 11:42:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:30:13.365 11:42:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:30:13.365 11:42:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:30:13.365 11:42:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:30:13.365 11:42:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:30:13.365 11:42:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:30:13.365 11:42:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.3' 00:30:13.365 11:42:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:30:13.365 11:42:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:30:13.365 11:42:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:30:13.365 11:42:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:30:13.365 11:42:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:30:13.365 11:42:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:30:16.649 Initializing NVMe Controllers 00:30:16.649 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:testnqn 00:30:16.649 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:30:16.649 Initialization complete. Launching workers. 
00:30:16.649 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 10839, failed: 0 00:30:16.649 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1061, failed to submit 9778 00:30:16.649 success 769, unsuccessful 292, failed 0 00:30:16.649 11:43:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:30:16.649 11:43:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:30:19.944 Initializing NVMe Controllers 00:30:19.944 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:testnqn 00:30:19.944 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:30:19.944 Initialization complete. Launching workers. 00:30:19.944 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 5920, failed: 0 00:30:19.944 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1253, failed to submit 4667 00:30:19.944 success 214, unsuccessful 1039, failed 0 00:30:20.202 11:43:05 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:30:20.202 11:43:05 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:30:23.490 Initializing NVMe Controllers 00:30:23.490 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:testnqn 00:30:23.490 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:30:23.490 Initialization complete. Launching workers. 
00:30:23.490 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 29391, failed: 0 00:30:23.490 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2660, failed to submit 26731 00:30:23.490 success 367, unsuccessful 2293, failed 0 00:30:23.490 11:43:08 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:30:23.490 11:43:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:23.490 11:43:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:30:23.490 11:43:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:23.490 11:43:08 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:30:23.490 11:43:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:23.490 11:43:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:30:24.434 11:43:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:24.434 11:43:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 109025 00:30:24.434 11:43:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # '[' -z 109025 ']' 00:30:24.434 11:43:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # kill -0 109025 00:30:24.434 11:43:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # uname 00:30:24.434 11:43:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:24.434 11:43:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 109025 00:30:24.434 11:43:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:24.434 11:43:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:24.434 killing process with pid 109025 00:30:24.434 11:43:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 109025' 00:30:24.434 11:43:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@973 -- # kill 109025 00:30:24.434 11:43:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@978 -- # wait 109025 00:30:24.434 00:30:24.434 real 0m11.167s 00:30:24.434 user 0m43.115s 00:30:24.434 sys 0m1.696s 00:30:24.434 11:43:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:24.434 11:43:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:30:24.434 ************************************ 00:30:24.434 END TEST spdk_target_abort 00:30:24.434 ************************************ 00:30:24.435 11:43:09 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:30:24.435 11:43:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:30:24.435 11:43:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:24.435 11:43:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:30:24.435 ************************************ 00:30:24.435 START TEST kernel_target_abort 00:30:24.435 
************************************ 00:30:24.435 11:43:09 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1129 -- # kernel_target 00:30:24.435 11:43:09 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:30:24.435 11:43:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@769 -- # local ip 00:30:24.435 11:43:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:24.435 11:43:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:24.435 11:43:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:24.435 11:43:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:24.435 11:43:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:24.435 11:43:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:24.435 11:43:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:24.435 11:43:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:24.435 11:43:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:24.435 11:43:09 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:30:24.435 11:43:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:30:24.435 11:43:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:30:24.435 11:43:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:30:24.435 11:43:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:30:24.435 11:43:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:30:24.435 11:43:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # local block nvme 00:30:24.435 11:43:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:30:24.435 11:43:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@670 -- # modprobe nvmet 00:30:24.435 11:43:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:30:24.435 11:43:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:30:25.003 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:30:25.003 Waiting for block devices as requested 00:30:25.003 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:30:25.003 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:30:25.003 11:43:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:30:25.003 11:43:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:30:25.003 11:43:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:30:25.003 11:43:10 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:30:25.003 11:43:10 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:30:25.003 11:43:10 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:30:25.262 11:43:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:30:25.262 11:43:10 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:30:25.262 11:43:10 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:30:25.262 No valid GPT data, bailing 00:30:25.262 11:43:10 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:30:25.262 11:43:10 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:30:25.262 11:43:10 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:30:25.262 11:43:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:30:25.262 11:43:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:30:25.262 11:43:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n2 ]] 00:30:25.262 11:43:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n2 00:30:25.262 11:43:10 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n2 00:30:25.262 11:43:10 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:30:25.262 11:43:10 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:30:25.262 11:43:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n2 00:30:25.262 11:43:10 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n2 pt 00:30:25.262 11:43:10 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:30:25.262 No valid GPT data, bailing 00:30:25.262 11:43:10 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 
00:30:25.262 11:43:10 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:30:25.262 11:43:10 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:30:25.262 11:43:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n2 00:30:25.263 11:43:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:30:25.263 11:43:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n3 ]] 00:30:25.263 11:43:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n3 00:30:25.263 11:43:10 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n3 00:30:25.263 11:43:10 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:30:25.263 11:43:10 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:30:25.263 11:43:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n3 00:30:25.263 11:43:10 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n3 pt 00:30:25.263 11:43:10 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:30:25.263 No valid GPT data, bailing 00:30:25.263 11:43:10 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:30:25.263 11:43:10 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:30:25.263 11:43:10 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:30:25.263 11:43:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n3 00:30:25.263 11:43:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:30:25.263 11:43:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme1n1 ]] 00:30:25.263 11:43:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme1n1 00:30:25.263 11:43:10 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:30:25.263 11:43:10 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:30:25.263 11:43:10 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:30:25.263 11:43:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme1n1 00:30:25.263 11:43:10 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:30:25.263 11:43:10 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:30:25.521 No valid GPT data, bailing 00:30:25.521 11:43:10 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:30:25.521 11:43:10 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:30:25.521 11:43:10 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:30:25.521 11:43:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme1n1 00:30:25.521 11:43:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ 
-b /dev/nvme1n1 ]] 00:30:25.521 11:43:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:30:25.521 11:43:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:30:25.521 11:43:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:30:25.522 11:43:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:30:25.522 11:43:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # echo 1 00:30:25.522 11:43:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@696 -- # echo /dev/nvme1n1 00:30:25.522 11:43:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 1 00:30:25.522 11:43:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:30:25.522 11:43:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # echo tcp 00:30:25.522 11:43:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@701 -- # echo 4420 00:30:25.522 11:43:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@702 -- # echo ipv4 00:30:25.522 11:43:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:30:25.522 11:43:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a --hostid=806d442c-08ba-4719-b76d-33169283459a -a 10.0.0.1 -t tcp -s 4420 00:30:25.522 00:30:25.522 Discovery Log Number of Records 2, Generation counter 2 00:30:25.522 =====Discovery Log Entry 0====== 00:30:25.522 trtype: tcp 00:30:25.522 adrfam: ipv4 00:30:25.522 subtype: current discovery subsystem 00:30:25.522 treq: not specified, sq flow control disable supported 00:30:25.522 portid: 1 00:30:25.522 trsvcid: 4420 00:30:25.522 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:30:25.522 traddr: 10.0.0.1 00:30:25.522 eflags: none 00:30:25.522 sectype: none 00:30:25.522 =====Discovery Log Entry 1====== 00:30:25.522 trtype: tcp 00:30:25.522 adrfam: ipv4 00:30:25.522 subtype: nvme subsystem 00:30:25.522 treq: not specified, sq flow control disable supported 00:30:25.522 portid: 1 00:30:25.522 trsvcid: 4420 00:30:25.522 subnqn: nqn.2016-06.io.spdk:testnqn 00:30:25.522 traddr: 10.0.0.1 00:30:25.522 eflags: none 00:30:25.522 sectype: none 00:30:25.522 11:43:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:30:25.522 11:43:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:30:25.522 11:43:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:30:25.522 11:43:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:30:25.522 11:43:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:30:25.522 11:43:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:30:25.522 11:43:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:30:25.522 11:43:10 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:30:25.522 11:43:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:30:25.522 11:43:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:30:25.522 11:43:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:30:25.522 11:43:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:30:25.522 11:43:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:30:25.522 11:43:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:30:25.522 11:43:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:30:25.522 11:43:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:30:25.522 11:43:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:30:25.522 11:43:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:30:25.522 11:43:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:30:25.522 11:43:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:30:25.522 11:43:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:30:28.807 Initializing NVMe Controllers 00:30:28.807 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:30:28.807 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:30:28.807 Initialization complete. Launching workers. 00:30:28.807 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 33419, failed: 0 00:30:28.807 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 33419, failed to submit 0 00:30:28.807 success 0, unsuccessful 33419, failed 0 00:30:28.807 11:43:14 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:30:28.807 11:43:14 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:30:32.152 Initializing NVMe Controllers 00:30:32.152 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:30:32.152 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:30:32.152 Initialization complete. Launching workers. 
00:30:32.152 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 68747, failed: 0 00:30:32.152 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 29548, failed to submit 39199 00:30:32.152 success 0, unsuccessful 29548, failed 0 00:30:32.152 11:43:17 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:30:32.153 11:43:17 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:30:35.436 Initializing NVMe Controllers 00:30:35.436 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:30:35.436 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:30:35.436 Initialization complete. Launching workers. 00:30:35.436 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 81449, failed: 0 00:30:35.436 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 20330, failed to submit 61119 00:30:35.436 success 0, unsuccessful 20330, failed 0 00:30:35.436 11:43:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:30:35.436 11:43:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:30:35.436 11:43:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # echo 0 00:30:35.436 11:43:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:30:35.436 11:43:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:30:35.436 11:43:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:30:35.436 11:43:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:30:35.436 11:43:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:30:35.436 11:43:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:30:35.436 11:43:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@726 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:30:35.694 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:30:37.600 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:30:37.868 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:30:37.868 00:30:37.868 real 0m13.323s 00:30:37.868 user 0m6.656s 00:30:37.868 sys 0m4.180s 00:30:37.868 11:43:23 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:37.868 11:43:23 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:30:37.868 ************************************ 00:30:37.868 END TEST kernel_target_abort 00:30:37.868 ************************************ 00:30:37.868 11:43:23 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:30:37.868 11:43:23 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:30:37.868 
11:43:23 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:37.868 11:43:23 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:30:37.868 11:43:23 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:37.868 11:43:23 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:30:37.868 11:43:23 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:37.868 11:43:23 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:37.868 rmmod nvme_tcp 00:30:37.868 rmmod nvme_fabrics 00:30:37.868 rmmod nvme_keyring 00:30:37.868 11:43:23 nvmf_abort_qd_sizes -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:37.868 11:43:23 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:30:37.868 11:43:23 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:30:37.868 11:43:23 nvmf_abort_qd_sizes -- nvmf/common.sh@517 -- # '[' -n 109025 ']' 00:30:37.868 11:43:23 nvmf_abort_qd_sizes -- nvmf/common.sh@518 -- # killprocess 109025 00:30:37.868 11:43:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # '[' -z 109025 ']' 00:30:37.868 11:43:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@958 -- # kill -0 109025 00:30:37.868 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (109025) - No such process 00:30:37.868 Process with pid 109025 is not found 00:30:37.868 11:43:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@981 -- # echo 'Process with pid 109025 is not found' 00:30:37.868 11:43:23 nvmf_abort_qd_sizes -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:30:37.868 11:43:23 nvmf_abort_qd_sizes -- nvmf/common.sh@521 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:30:38.435 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:30:38.435 Waiting for block devices as requested 00:30:38.435 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:30:38.435 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:30:38.435 11:43:23 nvmf_abort_qd_sizes -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:38.435 11:43:23 nvmf_abort_qd_sizes -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:38.435 11:43:23 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:30:38.435 11:43:23 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:38.435 11:43:23 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-save 00:30:38.435 11:43:23 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-restore 00:30:38.435 11:43:23 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:38.435 11:43:23 nvmf_abort_qd_sizes -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:30:38.435 11:43:23 nvmf_abort_qd_sizes -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:30:38.436 11:43:24 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:30:38.694 11:43:24 nvmf_abort_qd_sizes -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:30:38.694 11:43:24 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:30:38.694 11:43:24 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:30:38.694 11:43:24 nvmf_abort_qd_sizes -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:30:38.694 11:43:24 nvmf_abort_qd_sizes -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:30:38.694 11:43:24 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:30:38.694 11:43:24 
nvmf_abort_qd_sizes -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:30:38.694 11:43:24 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:30:38.694 11:43:24 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:30:38.694 11:43:24 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:30:38.694 11:43:24 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:30:38.694 11:43:24 nvmf_abort_qd_sizes -- nvmf/common.sh@246 -- # remove_spdk_ns 00:30:38.694 11:43:24 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:38.694 11:43:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:30:38.694 11:43:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:38.694 11:43:24 nvmf_abort_qd_sizes -- nvmf/common.sh@300 -- # return 0 00:30:38.694 00:30:38.694 real 0m27.351s 00:30:38.694 user 0m50.877s 00:30:38.694 sys 0m7.316s 00:30:38.694 11:43:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:38.694 11:43:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:30:38.694 ************************************ 00:30:38.694 END TEST nvmf_abort_qd_sizes 00:30:38.694 ************************************ 00:30:38.694 11:43:24 -- spdk/autotest.sh@292 -- # run_test keyring_file /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:30:38.694 11:43:24 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:30:38.694 11:43:24 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:38.694 11:43:24 -- common/autotest_common.sh@10 -- # set +x 00:30:38.694 ************************************ 00:30:38.694 START TEST keyring_file 00:30:38.694 ************************************ 00:30:38.694 11:43:24 keyring_file -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:30:38.953 * Looking for test storage... 
00:30:38.953 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:30:38.953 11:43:24 keyring_file -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:30:38.953 11:43:24 keyring_file -- common/autotest_common.sh@1693 -- # lcov --version 00:30:38.953 11:43:24 keyring_file -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:30:38.953 11:43:24 keyring_file -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:30:38.953 11:43:24 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:38.953 11:43:24 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:38.953 11:43:24 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:38.953 11:43:24 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:30:38.953 11:43:24 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:30:38.953 11:43:24 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:30:38.953 11:43:24 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:30:38.953 11:43:24 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:30:38.953 11:43:24 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:30:38.953 11:43:24 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:30:38.953 11:43:24 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:38.953 11:43:24 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:30:38.953 11:43:24 keyring_file -- scripts/common.sh@345 -- # : 1 00:30:38.953 11:43:24 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:38.953 11:43:24 keyring_file -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:38.953 11:43:24 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:30:38.953 11:43:24 keyring_file -- scripts/common.sh@353 -- # local d=1 00:30:38.953 11:43:24 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:38.953 11:43:24 keyring_file -- scripts/common.sh@355 -- # echo 1 00:30:38.953 11:43:24 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:30:38.953 11:43:24 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:30:38.953 11:43:24 keyring_file -- scripts/common.sh@353 -- # local d=2 00:30:38.953 11:43:24 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:38.953 11:43:24 keyring_file -- scripts/common.sh@355 -- # echo 2 00:30:38.953 11:43:24 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:30:38.953 11:43:24 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:38.953 11:43:24 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:38.953 11:43:24 keyring_file -- scripts/common.sh@368 -- # return 0 00:30:38.953 11:43:24 keyring_file -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:38.953 11:43:24 keyring_file -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:30:38.953 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:38.953 --rc genhtml_branch_coverage=1 00:30:38.953 --rc genhtml_function_coverage=1 00:30:38.953 --rc genhtml_legend=1 00:30:38.953 --rc geninfo_all_blocks=1 00:30:38.953 --rc geninfo_unexecuted_blocks=1 00:30:38.953 00:30:38.953 ' 00:30:38.953 11:43:24 keyring_file -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:30:38.953 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:38.953 --rc genhtml_branch_coverage=1 00:30:38.953 --rc genhtml_function_coverage=1 00:30:38.953 --rc genhtml_legend=1 00:30:38.953 --rc geninfo_all_blocks=1 00:30:38.953 --rc 
geninfo_unexecuted_blocks=1 00:30:38.953 00:30:38.953 ' 00:30:38.954 11:43:24 keyring_file -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:30:38.954 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:38.954 --rc genhtml_branch_coverage=1 00:30:38.954 --rc genhtml_function_coverage=1 00:30:38.954 --rc genhtml_legend=1 00:30:38.954 --rc geninfo_all_blocks=1 00:30:38.954 --rc geninfo_unexecuted_blocks=1 00:30:38.954 00:30:38.954 ' 00:30:38.954 11:43:24 keyring_file -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:30:38.954 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:38.954 --rc genhtml_branch_coverage=1 00:30:38.954 --rc genhtml_function_coverage=1 00:30:38.954 --rc genhtml_legend=1 00:30:38.954 --rc geninfo_all_blocks=1 00:30:38.954 --rc geninfo_unexecuted_blocks=1 00:30:38.954 00:30:38.954 ' 00:30:38.954 11:43:24 keyring_file -- keyring/file.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:30:38.954 11:43:24 keyring_file -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:30:38.954 11:43:24 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:30:38.954 11:43:24 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:38.954 11:43:24 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:38.954 11:43:24 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:38.954 11:43:24 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:38.954 11:43:24 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:38.954 11:43:24 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:38.954 11:43:24 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:38.954 11:43:24 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:38.954 11:43:24 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:38.954 11:43:24 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:38.954 11:43:24 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a 00:30:38.954 11:43:24 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=806d442c-08ba-4719-b76d-33169283459a 00:30:38.954 11:43:24 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:38.954 11:43:24 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:38.954 11:43:24 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:30:38.954 11:43:24 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:38.954 11:43:24 keyring_file -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:30:38.954 11:43:24 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:30:38.954 11:43:24 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:38.954 11:43:24 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:38.954 11:43:24 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:38.954 11:43:24 keyring_file -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:38.954 11:43:24 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:38.954 11:43:24 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:38.954 11:43:24 keyring_file -- paths/export.sh@5 -- # export PATH 00:30:38.954 11:43:24 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:38.954 11:43:24 keyring_file -- nvmf/common.sh@51 -- # : 0 00:30:38.954 11:43:24 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:38.954 11:43:24 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:38.954 11:43:24 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:38.954 11:43:24 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:38.954 11:43:24 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:38.954 11:43:24 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:38.954 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:38.954 11:43:24 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:38.954 11:43:24 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:38.954 11:43:24 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:38.954 11:43:24 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:30:38.954 11:43:24 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:30:38.954 11:43:24 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:30:38.954 11:43:24 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:30:38.954 11:43:24 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:30:38.954 11:43:24 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:30:38.954 11:43:24 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:30:38.954 11:43:24 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:30:38.954 11:43:24 
keyring_file -- keyring/common.sh@17 -- # name=key0 00:30:38.954 11:43:24 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:30:38.954 11:43:24 keyring_file -- keyring/common.sh@17 -- # digest=0 00:30:38.954 11:43:24 keyring_file -- keyring/common.sh@18 -- # mktemp 00:30:38.954 11:43:24 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.8d5nMT04rX 00:30:38.954 11:43:24 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:30:38.954 11:43:24 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:30:38.954 11:43:24 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:30:38.954 11:43:24 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:30:38.954 11:43:24 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:30:38.954 11:43:24 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:30:38.954 11:43:24 keyring_file -- nvmf/common.sh@733 -- # python - 00:30:39.213 11:43:24 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.8d5nMT04rX 00:30:39.213 11:43:24 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.8d5nMT04rX 00:30:39.213 11:43:24 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.8d5nMT04rX 00:30:39.213 11:43:24 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:30:39.213 11:43:24 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:30:39.213 11:43:24 keyring_file -- keyring/common.sh@17 -- # name=key1 00:30:39.213 11:43:24 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:30:39.213 11:43:24 keyring_file -- keyring/common.sh@17 -- # digest=0 00:30:39.213 11:43:24 keyring_file -- keyring/common.sh@18 -- # mktemp 00:30:39.213 11:43:24 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.WOYDjndjO1 00:30:39.213 11:43:24 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:30:39.213 11:43:24 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:30:39.213 11:43:24 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:30:39.213 11:43:24 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:30:39.213 11:43:24 keyring_file -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:30:39.213 11:43:24 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:30:39.213 11:43:24 keyring_file -- nvmf/common.sh@733 -- # python - 00:30:39.213 11:43:24 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.WOYDjndjO1 00:30:39.213 11:43:24 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.WOYDjndjO1 00:30:39.213 11:43:24 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.WOYDjndjO1 00:30:39.213 11:43:24 keyring_file -- keyring/file.sh@30 -- # tgtpid=109942 00:30:39.213 11:43:24 keyring_file -- keyring/file.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:30:39.213 11:43:24 keyring_file -- keyring/file.sh@32 -- # waitforlisten 109942 00:30:39.213 11:43:24 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 109942 ']' 00:30:39.213 11:43:24 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:39.213 11:43:24 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:39.213 11:43:24 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock...' 00:30:39.213 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:39.213 11:43:24 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:39.213 11:43:24 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:30:39.213 [2024-11-20 11:43:24.665769] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 00:30:39.213 [2024-11-20 11:43:24.665866] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid109942 ] 00:30:39.472 [2024-11-20 11:43:24.829660] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:39.472 [2024-11-20 11:43:24.871235] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:39.472 11:43:25 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:39.472 11:43:25 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:30:39.472 11:43:25 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:30:39.472 11:43:25 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:39.472 11:43:25 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:30:39.731 [2024-11-20 11:43:25.063935] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:39.731 null0 00:30:39.731 [2024-11-20 11:43:25.095893] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:30:39.731 [2024-11-20 11:43:25.096081] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:30:39.731 11:43:25 keyring_file -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:39.731 11:43:25 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:30:39.731 11:43:25 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:30:39.731 11:43:25 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:30:39.731 11:43:25 keyring_file -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:30:39.731 11:43:25 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:39.731 11:43:25 keyring_file -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:30:39.731 11:43:25 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:39.731 11:43:25 keyring_file -- common/autotest_common.sh@655 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:30:39.731 11:43:25 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:39.731 11:43:25 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:30:39.731 [2024-11-20 11:43:25.123898] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:30:39.731 2024/11/20 11:43:25 error on JSON-RPC call, method: nvmf_subsystem_add_listener, params: map[listen_address:map[traddr:127.0.0.1 trsvcid:4420 trtype:tcp] nqn:nqn.2016-06.io.spdk:cnode0 secure_channel:%!s(bool=false)], err: error received for nvmf_subsystem_add_listener method, err: Code=-32602 Msg=Invalid parameters 00:30:39.731 request: 00:30:39.731 { 00:30:39.731 "method": "nvmf_subsystem_add_listener", 00:30:39.731 "params": { 00:30:39.731 "nqn": 
"nqn.2016-06.io.spdk:cnode0", 00:30:39.731 "secure_channel": false, 00:30:39.731 "listen_address": { 00:30:39.731 "trtype": "tcp", 00:30:39.731 "traddr": "127.0.0.1", 00:30:39.731 "trsvcid": "4420" 00:30:39.731 } 00:30:39.731 } 00:30:39.731 } 00:30:39.731 Got JSON-RPC error response 00:30:39.731 GoRPCClient: error on JSON-RPC call 00:30:39.732 11:43:25 keyring_file -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:30:39.732 11:43:25 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:30:39.732 11:43:25 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:30:39.732 11:43:25 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:30:39.732 11:43:25 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:30:39.732 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:30:39.732 11:43:25 keyring_file -- keyring/file.sh@47 -- # bperfpid=109959 00:30:39.732 11:43:25 keyring_file -- keyring/file.sh@49 -- # waitforlisten 109959 /var/tmp/bperf.sock 00:30:39.732 11:43:25 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 109959 ']' 00:30:39.732 11:43:25 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:30:39.732 11:43:25 keyring_file -- keyring/file.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:30:39.732 11:43:25 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:39.732 11:43:25 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:30:39.732 11:43:25 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:39.732 11:43:25 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:30:39.732 [2024-11-20 11:43:25.189590] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 
00:30:39.732 [2024-11-20 11:43:25.189705] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid109959 ] 00:30:39.990 [2024-11-20 11:43:25.344425] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:39.990 [2024-11-20 11:43:25.386325] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:39.990 11:43:25 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:39.990 11:43:25 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:30:39.990 11:43:25 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.8d5nMT04rX 00:30:39.990 11:43:25 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.8d5nMT04rX 00:30:40.246 11:43:25 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.WOYDjndjO1 00:30:40.246 11:43:25 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.WOYDjndjO1 00:30:40.811 11:43:26 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:30:40.811 11:43:26 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:30:40.811 11:43:26 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:40.811 11:43:26 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:30:40.811 11:43:26 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:41.069 11:43:26 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.8d5nMT04rX == \/\t\m\p\/\t\m\p\.\8\d\5\n\M\T\0\4\r\X ]] 00:30:41.069 11:43:26 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:30:41.069 11:43:26 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:30:41.069 11:43:26 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:41.069 11:43:26 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:41.069 11:43:26 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:30:41.326 11:43:26 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.WOYDjndjO1 == \/\t\m\p\/\t\m\p\.\W\O\Y\D\j\n\d\j\O\1 ]] 00:30:41.326 11:43:26 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:30:41.326 11:43:26 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:30:41.326 11:43:26 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:41.326 11:43:26 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:41.326 11:43:26 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:41.326 11:43:26 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:30:41.584 11:43:27 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:30:41.584 11:43:27 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:30:41.584 11:43:27 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:30:41.584 11:43:27 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:41.584 11:43:27 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:41.584 11:43:27 keyring_file -- keyring/common.sh@8 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:41.584 11:43:27 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:30:42.149 11:43:27 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:30:42.149 11:43:27 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:30:42.149 11:43:27 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:30:42.149 [2024-11-20 11:43:27.735057] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:30:42.406 nvme0n1 00:30:42.406 11:43:27 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:30:42.406 11:43:27 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:30:42.406 11:43:27 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:42.406 11:43:27 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:42.406 11:43:27 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:42.406 11:43:27 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:30:42.664 11:43:28 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:30:42.664 11:43:28 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:30:42.664 11:43:28 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:30:42.664 11:43:28 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:42.664 11:43:28 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:30:42.664 11:43:28 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:42.664 11:43:28 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:42.923 11:43:28 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:30:42.923 11:43:28 keyring_file -- keyring/file.sh@63 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:30:43.182 Running I/O for 1 seconds... 
00:30:44.116 11311.00 IOPS, 44.18 MiB/s 00:30:44.116 Latency(us) 00:30:44.116 [2024-11-20T11:43:29.705Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:44.116 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:30:44.116 nvme0n1 : 1.01 11364.18 44.39 0.00 0.00 11233.25 6494.02 20256.58 00:30:44.116 [2024-11-20T11:43:29.705Z] =================================================================================================================== 00:30:44.116 [2024-11-20T11:43:29.705Z] Total : 11364.18 44.39 0.00 0.00 11233.25 6494.02 20256.58 00:30:44.116 { 00:30:44.116 "results": [ 00:30:44.116 { 00:30:44.116 "job": "nvme0n1", 00:30:44.116 "core_mask": "0x2", 00:30:44.116 "workload": "randrw", 00:30:44.116 "percentage": 50, 00:30:44.116 "status": "finished", 00:30:44.116 "queue_depth": 128, 00:30:44.116 "io_size": 4096, 00:30:44.116 "runtime": 1.006584, 00:30:44.116 "iops": 11364.17825039937, 00:30:44.116 "mibps": 44.39132129062254, 00:30:44.116 "io_failed": 0, 00:30:44.116 "io_timeout": 0, 00:30:44.116 "avg_latency_us": 11233.245087857329, 00:30:44.116 "min_latency_us": 6494.021818181818, 00:30:44.116 "max_latency_us": 20256.581818181818 00:30:44.116 } 00:30:44.116 ], 00:30:44.116 "core_count": 1 00:30:44.116 } 00:30:44.116 11:43:29 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:30:44.116 11:43:29 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:30:44.375 11:43:29 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:30:44.375 11:43:29 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:30:44.375 11:43:29 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:44.375 11:43:29 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:44.375 11:43:29 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:30:44.375 11:43:29 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:44.636 11:43:30 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:30:44.636 11:43:30 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:30:44.636 11:43:30 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:44.636 11:43:30 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:30:44.909 11:43:30 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:44.909 11:43:30 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:44.909 11:43:30 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:30:45.167 11:43:30 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:30:45.167 11:43:30 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:30:45.167 11:43:30 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:30:45.167 11:43:30 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:30:45.167 11:43:30 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:30:45.167 11:43:30 keyring_file -- 
common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:45.167 11:43:30 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:30:45.167 11:43:30 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:45.167 11:43:30 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:30:45.167 11:43:30 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:30:45.426 [2024-11-20 11:43:30.840916] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:30:45.426 [2024-11-20 11:43:30.841549] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8e8fd0 (107): Transport endpoint is not connected 00:30:45.426 [2024-11-20 11:43:30.842531] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8e8fd0 (9): Bad file descriptor 00:30:45.426 [2024-11-20 11:43:30.843527] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:30:45.426 [2024-11-20 11:43:30.843645] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:30:45.426 [2024-11-20 11:43:30.843752] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:30:45.426 [2024-11-20 11:43:30.843837] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
00:30:45.426 2024/11/20 11:43:30 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host0 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:key1 subnqn:nqn.2016-06.io.spdk:cnode0 traddr:127.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:30:45.426 request: 00:30:45.426 { 00:30:45.426 "method": "bdev_nvme_attach_controller", 00:30:45.426 "params": { 00:30:45.426 "name": "nvme0", 00:30:45.426 "trtype": "tcp", 00:30:45.426 "traddr": "127.0.0.1", 00:30:45.426 "adrfam": "ipv4", 00:30:45.426 "trsvcid": "4420", 00:30:45.426 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:45.426 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:45.426 "prchk_reftag": false, 00:30:45.426 "prchk_guard": false, 00:30:45.426 "hdgst": false, 00:30:45.426 "ddgst": false, 00:30:45.426 "psk": "key1", 00:30:45.426 "allow_unrecognized_csi": false 00:30:45.426 } 00:30:45.426 } 00:30:45.426 Got JSON-RPC error response 00:30:45.426 GoRPCClient: error on JSON-RPC call 00:30:45.426 11:43:30 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:30:45.426 11:43:30 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:30:45.426 11:43:30 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:30:45.426 11:43:30 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:30:45.426 11:43:30 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:30:45.426 11:43:30 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:30:45.426 11:43:30 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:45.426 11:43:30 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:45.426 11:43:30 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:30:45.426 11:43:30 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:45.685 11:43:31 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:30:45.685 11:43:31 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:30:45.685 11:43:31 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:45.685 11:43:31 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:30:45.685 11:43:31 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:45.685 11:43:31 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:45.685 11:43:31 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:30:45.944 11:43:31 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:30:45.944 11:43:31 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:30:45.944 11:43:31 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:30:46.510 11:43:31 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:30:46.510 11:43:31 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:30:46.510 11:43:32 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:30:46.510 11:43:32 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/bperf.sock keyring_get_keys 00:30:46.510 11:43:32 keyring_file -- keyring/file.sh@78 -- # jq length 00:30:47.076 11:43:32 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 )) 00:30:47.076 11:43:32 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.8d5nMT04rX 00:30:47.076 11:43:32 keyring_file -- keyring/file.sh@82 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.8d5nMT04rX 00:30:47.076 11:43:32 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:30:47.076 11:43:32 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.8d5nMT04rX 00:30:47.076 11:43:32 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:30:47.076 11:43:32 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:47.076 11:43:32 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:30:47.077 11:43:32 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:47.077 11:43:32 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.8d5nMT04rX 00:30:47.077 11:43:32 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.8d5nMT04rX 00:30:47.335 [2024-11-20 11:43:32.720628] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.8d5nMT04rX': 0100660 00:30:47.335 [2024-11-20 11:43:32.721182] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:30:47.335 2024/11/20 11:43:32 error on JSON-RPC call, method: keyring_file_add_key, params: map[name:key0 path:/tmp/tmp.8d5nMT04rX], err: error received for keyring_file_add_key method, err: Code=-1 Msg=Operation not permitted 00:30:47.335 request: 00:30:47.335 { 00:30:47.335 "method": "keyring_file_add_key", 00:30:47.335 "params": { 00:30:47.335 "name": "key0", 00:30:47.335 "path": "/tmp/tmp.8d5nMT04rX" 00:30:47.335 } 00:30:47.335 } 00:30:47.335 Got JSON-RPC error response 00:30:47.335 GoRPCClient: error on JSON-RPC call 00:30:47.335 11:43:32 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:30:47.335 11:43:32 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:30:47.335 11:43:32 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:30:47.335 11:43:32 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:30:47.335 11:43:32 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.8d5nMT04rX 00:30:47.335 11:43:32 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.8d5nMT04rX 00:30:47.335 11:43:32 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.8d5nMT04rX 00:30:47.593 11:43:33 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.8d5nMT04rX 00:30:47.593 11:43:33 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:30:47.593 11:43:33 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:30:47.593 11:43:33 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:47.593 11:43:33 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:47.593 11:43:33 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:30:47.593 11:43:33 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:48.159 11:43:33 keyring_file -- 
keyring/file.sh@89 -- # (( 1 == 1 )) 00:30:48.159 11:43:33 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:30:48.159 11:43:33 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:30:48.159 11:43:33 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:30:48.159 11:43:33 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:30:48.159 11:43:33 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:48.159 11:43:33 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:30:48.159 11:43:33 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:48.159 11:43:33 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:30:48.159 11:43:33 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:30:48.417 [2024-11-20 11:43:33.788912] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.8d5nMT04rX': No such file or directory 00:30:48.417 [2024-11-20 11:43:33.789455] nvme_tcp.c:2498:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:30:48.417 [2024-11-20 11:43:33.789589] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:30:48.417 [2024-11-20 11:43:33.789749] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:30:48.417 [2024-11-20 11:43:33.789882] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:30:48.417 [2024-11-20 11:43:33.790049] bdev_nvme.c:6764:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:30:48.417 2024/11/20 11:43:33 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host0 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:key0 subnqn:nqn.2016-06.io.spdk:cnode0 traddr:127.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-19 Msg=No such device 00:30:48.417 request: 00:30:48.417 { 00:30:48.417 "method": "bdev_nvme_attach_controller", 00:30:48.417 "params": { 00:30:48.417 "name": "nvme0", 00:30:48.417 "trtype": "tcp", 00:30:48.417 "traddr": "127.0.0.1", 00:30:48.417 "adrfam": "ipv4", 00:30:48.417 "trsvcid": "4420", 00:30:48.417 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:48.417 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:48.417 "prchk_reftag": false, 00:30:48.417 "prchk_guard": false, 00:30:48.417 "hdgst": false, 00:30:48.417 "ddgst": false, 00:30:48.417 "psk": "key0", 00:30:48.417 "allow_unrecognized_csi": false 00:30:48.417 } 00:30:48.417 } 00:30:48.417 Got JSON-RPC error response 00:30:48.417 
GoRPCClient: error on JSON-RPC call 00:30:48.417 11:43:33 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:30:48.417 11:43:33 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:30:48.417 11:43:33 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:30:48.417 11:43:33 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:30:48.417 11:43:33 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:30:48.417 11:43:33 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:30:48.676 11:43:34 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:30:48.676 11:43:34 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:30:48.676 11:43:34 keyring_file -- keyring/common.sh@17 -- # name=key0 00:30:48.676 11:43:34 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:30:48.676 11:43:34 keyring_file -- keyring/common.sh@17 -- # digest=0 00:30:48.676 11:43:34 keyring_file -- keyring/common.sh@18 -- # mktemp 00:30:48.676 11:43:34 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.UVz3Q7Yylx 00:30:48.676 11:43:34 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:30:48.676 11:43:34 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:30:48.676 11:43:34 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:30:48.676 11:43:34 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:30:48.676 11:43:34 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:30:48.676 11:43:34 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:30:48.676 11:43:34 keyring_file -- nvmf/common.sh@733 -- # python - 00:30:48.676 11:43:34 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.UVz3Q7Yylx 00:30:48.676 11:43:34 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.UVz3Q7Yylx 00:30:48.676 11:43:34 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.UVz3Q7Yylx 00:30:48.676 11:43:34 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.UVz3Q7Yylx 00:30:48.676 11:43:34 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.UVz3Q7Yylx 00:30:49.242 11:43:34 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:30:49.242 11:43:34 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:30:49.501 nvme0n1 00:30:49.501 11:43:34 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:30:49.501 11:43:34 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:30:49.501 11:43:34 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:49.501 11:43:34 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:49.501 11:43:34 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:49.501 11:43:34 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 
00:30:49.759 11:43:35 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:30:49.759 11:43:35 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:30:49.759 11:43:35 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:30:50.018 11:43:35 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:30:50.018 11:43:35 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:30:50.018 11:43:35 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:50.018 11:43:35 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:30:50.018 11:43:35 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:50.586 11:43:35 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:30:50.586 11:43:35 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:30:50.586 11:43:35 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:30:50.586 11:43:35 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:50.586 11:43:35 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:50.586 11:43:35 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:30:50.586 11:43:35 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:50.844 11:43:36 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:30:50.844 11:43:36 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:30:50.844 11:43:36 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:30:51.102 11:43:36 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:30:51.102 11:43:36 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:51.102 11:43:36 keyring_file -- keyring/file.sh@105 -- # jq length 00:30:51.364 11:43:36 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:30:51.364 11:43:36 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.UVz3Q7Yylx 00:30:51.364 11:43:36 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.UVz3Q7Yylx 00:30:51.931 11:43:37 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.WOYDjndjO1 00:30:51.931 11:43:37 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.WOYDjndjO1 00:30:52.190 11:43:37 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:30:52.190 11:43:37 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:30:52.448 nvme0n1 00:30:52.448 11:43:37 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:30:52.448 11:43:37 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 
00:30:53.015 11:43:38 keyring_file -- keyring/file.sh@113 -- # config='{ 00:30:53.015 "subsystems": [ 00:30:53.015 { 00:30:53.015 "subsystem": "keyring", 00:30:53.015 "config": [ 00:30:53.015 { 00:30:53.015 "method": "keyring_file_add_key", 00:30:53.015 "params": { 00:30:53.015 "name": "key0", 00:30:53.015 "path": "/tmp/tmp.UVz3Q7Yylx" 00:30:53.015 } 00:30:53.015 }, 00:30:53.015 { 00:30:53.015 "method": "keyring_file_add_key", 00:30:53.015 "params": { 00:30:53.015 "name": "key1", 00:30:53.015 "path": "/tmp/tmp.WOYDjndjO1" 00:30:53.015 } 00:30:53.015 } 00:30:53.015 ] 00:30:53.015 }, 00:30:53.015 { 00:30:53.015 "subsystem": "iobuf", 00:30:53.015 "config": [ 00:30:53.015 { 00:30:53.015 "method": "iobuf_set_options", 00:30:53.015 "params": { 00:30:53.015 "enable_numa": false, 00:30:53.015 "large_bufsize": 135168, 00:30:53.015 "large_pool_count": 1024, 00:30:53.015 "small_bufsize": 8192, 00:30:53.015 "small_pool_count": 8192 00:30:53.015 } 00:30:53.015 } 00:30:53.015 ] 00:30:53.015 }, 00:30:53.015 { 00:30:53.015 "subsystem": "sock", 00:30:53.015 "config": [ 00:30:53.015 { 00:30:53.015 "method": "sock_set_default_impl", 00:30:53.015 "params": { 00:30:53.015 "impl_name": "posix" 00:30:53.015 } 00:30:53.015 }, 00:30:53.015 { 00:30:53.015 "method": "sock_impl_set_options", 00:30:53.015 "params": { 00:30:53.015 "enable_ktls": false, 00:30:53.015 "enable_placement_id": 0, 00:30:53.015 "enable_quickack": false, 00:30:53.015 "enable_recv_pipe": true, 00:30:53.015 "enable_zerocopy_send_client": false, 00:30:53.015 "enable_zerocopy_send_server": true, 00:30:53.015 "impl_name": "ssl", 00:30:53.015 "recv_buf_size": 4096, 00:30:53.015 "send_buf_size": 4096, 00:30:53.015 "tls_version": 0, 00:30:53.015 "zerocopy_threshold": 0 00:30:53.015 } 00:30:53.015 }, 00:30:53.015 { 00:30:53.015 "method": "sock_impl_set_options", 00:30:53.015 "params": { 00:30:53.015 "enable_ktls": false, 00:30:53.015 "enable_placement_id": 0, 00:30:53.015 "enable_quickack": false, 00:30:53.015 "enable_recv_pipe": true, 00:30:53.015 "enable_zerocopy_send_client": false, 00:30:53.015 "enable_zerocopy_send_server": true, 00:30:53.015 "impl_name": "posix", 00:30:53.015 "recv_buf_size": 2097152, 00:30:53.015 "send_buf_size": 2097152, 00:30:53.015 "tls_version": 0, 00:30:53.015 "zerocopy_threshold": 0 00:30:53.015 } 00:30:53.015 } 00:30:53.015 ] 00:30:53.015 }, 00:30:53.015 { 00:30:53.015 "subsystem": "vmd", 00:30:53.015 "config": [] 00:30:53.015 }, 00:30:53.015 { 00:30:53.015 "subsystem": "accel", 00:30:53.015 "config": [ 00:30:53.015 { 00:30:53.015 "method": "accel_set_options", 00:30:53.015 "params": { 00:30:53.015 "buf_count": 2048, 00:30:53.015 "large_cache_size": 16, 00:30:53.015 "sequence_count": 2048, 00:30:53.015 "small_cache_size": 128, 00:30:53.015 "task_count": 2048 00:30:53.015 } 00:30:53.015 } 00:30:53.015 ] 00:30:53.015 }, 00:30:53.015 { 00:30:53.015 "subsystem": "bdev", 00:30:53.015 "config": [ 00:30:53.015 { 00:30:53.015 "method": "bdev_set_options", 00:30:53.015 "params": { 00:30:53.015 "bdev_auto_examine": true, 00:30:53.015 "bdev_io_cache_size": 256, 00:30:53.015 "bdev_io_pool_size": 65535, 00:30:53.015 "iobuf_large_cache_size": 16, 00:30:53.015 "iobuf_small_cache_size": 128 00:30:53.015 } 00:30:53.015 }, 00:30:53.015 { 00:30:53.015 "method": "bdev_raid_set_options", 00:30:53.015 "params": { 00:30:53.015 "process_max_bandwidth_mb_sec": 0, 00:30:53.015 "process_window_size_kb": 1024 00:30:53.015 } 00:30:53.015 }, 00:30:53.015 { 00:30:53.015 "method": "bdev_iscsi_set_options", 00:30:53.015 "params": { 00:30:53.015 
"timeout_sec": 30 00:30:53.015 } 00:30:53.015 }, 00:30:53.015 { 00:30:53.015 "method": "bdev_nvme_set_options", 00:30:53.016 "params": { 00:30:53.016 "action_on_timeout": "none", 00:30:53.016 "allow_accel_sequence": false, 00:30:53.016 "arbitration_burst": 0, 00:30:53.016 "bdev_retry_count": 3, 00:30:53.016 "ctrlr_loss_timeout_sec": 0, 00:30:53.016 "delay_cmd_submit": true, 00:30:53.016 "dhchap_dhgroups": [ 00:30:53.016 "null", 00:30:53.016 "ffdhe2048", 00:30:53.016 "ffdhe3072", 00:30:53.016 "ffdhe4096", 00:30:53.016 "ffdhe6144", 00:30:53.016 "ffdhe8192" 00:30:53.016 ], 00:30:53.016 "dhchap_digests": [ 00:30:53.016 "sha256", 00:30:53.016 "sha384", 00:30:53.016 "sha512" 00:30:53.016 ], 00:30:53.016 "disable_auto_failback": false, 00:30:53.016 "fast_io_fail_timeout_sec": 0, 00:30:53.016 "generate_uuids": false, 00:30:53.016 "high_priority_weight": 0, 00:30:53.016 "io_path_stat": false, 00:30:53.016 "io_queue_requests": 512, 00:30:53.016 "keep_alive_timeout_ms": 10000, 00:30:53.016 "low_priority_weight": 0, 00:30:53.016 "medium_priority_weight": 0, 00:30:53.016 "nvme_adminq_poll_period_us": 10000, 00:30:53.016 "nvme_error_stat": false, 00:30:53.016 "nvme_ioq_poll_period_us": 0, 00:30:53.016 "rdma_cm_event_timeout_ms": 0, 00:30:53.016 "rdma_max_cq_size": 0, 00:30:53.016 "rdma_srq_size": 0, 00:30:53.016 "reconnect_delay_sec": 0, 00:30:53.016 "timeout_admin_us": 0, 00:30:53.016 "timeout_us": 0, 00:30:53.016 "transport_ack_timeout": 0, 00:30:53.016 "transport_retry_count": 4, 00:30:53.016 "transport_tos": 0 00:30:53.016 } 00:30:53.016 }, 00:30:53.016 { 00:30:53.016 "method": "bdev_nvme_attach_controller", 00:30:53.016 "params": { 00:30:53.016 "adrfam": "IPv4", 00:30:53.016 "ctrlr_loss_timeout_sec": 0, 00:30:53.016 "ddgst": false, 00:30:53.016 "fast_io_fail_timeout_sec": 0, 00:30:53.016 "hdgst": false, 00:30:53.016 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:53.016 "multipath": "multipath", 00:30:53.016 "name": "nvme0", 00:30:53.016 "prchk_guard": false, 00:30:53.016 "prchk_reftag": false, 00:30:53.016 "psk": "key0", 00:30:53.016 "reconnect_delay_sec": 0, 00:30:53.016 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:53.016 "traddr": "127.0.0.1", 00:30:53.016 "trsvcid": "4420", 00:30:53.016 "trtype": "TCP" 00:30:53.016 } 00:30:53.016 }, 00:30:53.016 { 00:30:53.016 "method": "bdev_nvme_set_hotplug", 00:30:53.016 "params": { 00:30:53.016 "enable": false, 00:30:53.016 "period_us": 100000 00:30:53.016 } 00:30:53.016 }, 00:30:53.016 { 00:30:53.016 "method": "bdev_wait_for_examine" 00:30:53.016 } 00:30:53.016 ] 00:30:53.016 }, 00:30:53.016 { 00:30:53.016 "subsystem": "nbd", 00:30:53.016 "config": [] 00:30:53.016 } 00:30:53.016 ] 00:30:53.016 }' 00:30:53.016 11:43:38 keyring_file -- keyring/file.sh@115 -- # killprocess 109959 00:30:53.016 11:43:38 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 109959 ']' 00:30:53.016 11:43:38 keyring_file -- common/autotest_common.sh@958 -- # kill -0 109959 00:30:53.016 11:43:38 keyring_file -- common/autotest_common.sh@959 -- # uname 00:30:53.016 11:43:38 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:53.016 11:43:38 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 109959 00:30:53.016 killing process with pid 109959 00:30:53.016 Received shutdown signal, test time was about 1.000000 seconds 00:30:53.016 00:30:53.016 Latency(us) 00:30:53.016 [2024-11-20T11:43:38.605Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:53.016 [2024-11-20T11:43:38.605Z] 
=================================================================================================================== 00:30:53.016 [2024-11-20T11:43:38.605Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:53.016 11:43:38 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:30:53.016 11:43:38 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:30:53.016 11:43:38 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 109959' 00:30:53.016 11:43:38 keyring_file -- common/autotest_common.sh@973 -- # kill 109959 00:30:53.016 11:43:38 keyring_file -- common/autotest_common.sh@978 -- # wait 109959 00:30:53.016 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:30:53.016 11:43:38 keyring_file -- keyring/file.sh@118 -- # bperfpid=110441 00:30:53.016 11:43:38 keyring_file -- keyring/file.sh@120 -- # waitforlisten 110441 /var/tmp/bperf.sock 00:30:53.016 11:43:38 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 110441 ']' 00:30:53.016 11:43:38 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:30:53.016 11:43:38 keyring_file -- keyring/file.sh@116 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:30:53.016 11:43:38 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:53.016 11:43:38 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:30:53.016 11:43:38 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:30:53.016 "subsystems": [ 00:30:53.016 { 00:30:53.016 "subsystem": "keyring", 00:30:53.016 "config": [ 00:30:53.016 { 00:30:53.016 "method": "keyring_file_add_key", 00:30:53.016 "params": { 00:30:53.016 "name": "key0", 00:30:53.016 "path": "/tmp/tmp.UVz3Q7Yylx" 00:30:53.016 } 00:30:53.016 }, 00:30:53.016 { 00:30:53.016 "method": "keyring_file_add_key", 00:30:53.016 "params": { 00:30:53.016 "name": "key1", 00:30:53.016 "path": "/tmp/tmp.WOYDjndjO1" 00:30:53.016 } 00:30:53.016 } 00:30:53.016 ] 00:30:53.016 }, 00:30:53.016 { 00:30:53.016 "subsystem": "iobuf", 00:30:53.016 "config": [ 00:30:53.016 { 00:30:53.016 "method": "iobuf_set_options", 00:30:53.016 "params": { 00:30:53.016 "enable_numa": false, 00:30:53.016 "large_bufsize": 135168, 00:30:53.016 "large_pool_count": 1024, 00:30:53.016 "small_bufsize": 8192, 00:30:53.016 "small_pool_count": 8192 00:30:53.016 } 00:30:53.016 } 00:30:53.016 ] 00:30:53.016 }, 00:30:53.016 { 00:30:53.016 "subsystem": "sock", 00:30:53.016 "config": [ 00:30:53.016 { 00:30:53.016 "method": "sock_set_default_impl", 00:30:53.016 "params": { 00:30:53.016 "impl_name": "posix" 00:30:53.016 } 00:30:53.016 }, 00:30:53.016 { 00:30:53.016 "method": "sock_impl_set_options", 00:30:53.016 "params": { 00:30:53.016 "enable_ktls": false, 00:30:53.016 "enable_placement_id": 0, 00:30:53.016 "enable_quickack": false, 00:30:53.016 "enable_recv_pipe": true, 00:30:53.016 "enable_zerocopy_send_client": false, 00:30:53.016 "enable_zerocopy_send_server": true, 00:30:53.016 "impl_name": "ssl", 00:30:53.016 "recv_buf_size": 4096, 00:30:53.016 "send_buf_size": 4096, 00:30:53.016 "tls_version": 0, 00:30:53.016 "zerocopy_threshold": 0 00:30:53.016 } 00:30:53.016 }, 00:30:53.016 { 00:30:53.016 "method": "sock_impl_set_options", 00:30:53.016 "params": { 00:30:53.016 "enable_ktls": false, 00:30:53.016 "enable_placement_id": 0, 
00:30:53.016 "enable_quickack": false, 00:30:53.016 "enable_recv_pipe": true, 00:30:53.016 "enable_zerocopy_send_client": false, 00:30:53.016 "enable_zerocopy_send_server": true, 00:30:53.016 "impl_name": "posix", 00:30:53.016 "recv_buf_size": 2097152, 00:30:53.016 "send_buf_size": 2097152, 00:30:53.016 "tls_version": 0, 00:30:53.016 "zerocopy_threshold": 0 00:30:53.016 } 00:30:53.016 } 00:30:53.016 ] 00:30:53.016 }, 00:30:53.016 { 00:30:53.016 "subsystem": "vmd", 00:30:53.016 "config": [] 00:30:53.016 }, 00:30:53.016 { 00:30:53.016 "subsystem": "accel", 00:30:53.016 "config": [ 00:30:53.016 { 00:30:53.016 "method": "accel_set_options", 00:30:53.016 "params": { 00:30:53.016 "buf_count": 2048, 00:30:53.016 "large_cache_size": 16, 00:30:53.016 "sequence_count": 2048, 00:30:53.016 "small_cache_size": 128, 00:30:53.016 "task_count": 2048 00:30:53.016 } 00:30:53.016 } 00:30:53.016 ] 00:30:53.016 }, 00:30:53.016 { 00:30:53.016 "subsystem": "bdev", 00:30:53.016 "config": [ 00:30:53.016 { 00:30:53.016 "method": "bdev_set_options", 00:30:53.016 "params": { 00:30:53.016 "bdev_auto_examine": true, 00:30:53.016 "bdev_io_cache_size": 256, 00:30:53.016 "bdev_io_pool_size": 65535, 00:30:53.016 "iobuf_large_cache_size": 16, 00:30:53.016 "iobuf_small_cache_size": 128 00:30:53.016 } 00:30:53.017 }, 00:30:53.017 { 00:30:53.017 "method": "bdev_raid_set_options", 00:30:53.017 "params": { 00:30:53.017 "process_max_bandwidth_mb_sec": 0, 00:30:53.017 "process_window_size_kb": 1024 00:30:53.017 } 00:30:53.017 }, 00:30:53.017 { 00:30:53.017 "method": "bdev_iscsi_set_options", 00:30:53.017 "params": { 00:30:53.017 "timeout_sec": 30 00:30:53.017 } 00:30:53.017 }, 00:30:53.017 { 00:30:53.017 "method": "bdev_nvme_set_options", 00:30:53.017 "params": { 00:30:53.017 "action_on_timeout": "none", 00:30:53.017 "allow_accel_sequence": false, 00:30:53.017 "arbitration_burst": 0, 00:30:53.017 "bdev_retry_count": 3, 00:30:53.017 "ctrlr_loss_timeout_sec": 0, 00:30:53.017 "delay_cmd_submit": true, 00:30:53.017 "dhchap_dhgroups": [ 00:30:53.017 "null", 00:30:53.017 "ffdhe2048", 00:30:53.017 "ffdhe3072", 00:30:53.017 "ffdhe4096", 00:30:53.017 "ffdhe6144", 00:30:53.017 "ffdhe8192" 00:30:53.017 ], 00:30:53.017 "dhchap_digests": [ 00:30:53.017 "sha256", 00:30:53.017 "sha384", 00:30:53.017 "sha512" 00:30:53.017 ], 00:30:53.017 "disable_auto_failback": false, 00:30:53.017 "fast_io_fail_timeout_sec": 0, 00:30:53.017 "generate_uuids": false, 00:30:53.017 "high_priority_weight": 0, 00:30:53.017 "io_path_stat": false, 00:30:53.017 "io_queue_requests": 512, 00:30:53.017 "keep_alive_timeout_ms": 10000, 00:30:53.017 "low_priority_weight": 0, 00:30:53.017 "medium_priority_weight": 0, 00:30:53.017 "nvme_adminq_poll_period_us": 10000, 00:30:53.017 "nvme_error_stat": false, 00:30:53.017 "nvme_ioq_poll_period_us": 0, 00:30:53.017 "rdma_cm_event_timeout_ms": 0, 00:30:53.017 "rdma_max_cq_size": 0, 00:30:53.017 "rdma_srq_size": 0, 00:30:53.017 "reconnect_delay_sec": 0, 00:30:53.017 "timeout_admin_us": 0, 00:30:53.017 "timeout_us": 0, 00:30:53.017 "transport_ack_timeout": 0, 00:30:53.017 "transport_retry_count": 4, 00:30:53.017 "transport_tos": 0 00:30:53.017 } 00:30:53.017 }, 00:30:53.017 { 00:30:53.017 "method": "bdev_nvme_attach_controller", 00:30:53.017 "params": { 00:30:53.017 "adrfam": "IPv4", 00:30:53.017 "ctrlr_loss_timeout_sec": 0, 00:30:53.017 "ddgst": false, 00:30:53.017 "fast_io_fail_timeout_sec": 0, 00:30:53.017 "hdgst": false, 00:30:53.017 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:53.017 "multipath": "multipath", 00:30:53.017 "name": 
"nvme0", 00:30:53.017 "prchk_guard": false, 00:30:53.017 "prchk_reftag": false, 00:30:53.017 "psk": "key0", 00:30:53.017 "reconnect_delay_sec": 0, 00:30:53.017 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:53.017 "traddr": "127.0.0.1", 00:30:53.017 "trsvcid": "4420", 00:30:53.017 "trtype": "TCP" 00:30:53.017 } 00:30:53.017 }, 00:30:53.017 { 00:30:53.017 "method": "bdev_nvme_set_hotplug", 00:30:53.017 "params": { 00:30:53.017 "enable": false, 00:30:53.017 "period_us": 100000 00:30:53.017 } 00:30:53.017 }, 00:30:53.017 { 00:30:53.017 "method": "bdev_wait_for_examine" 00:30:53.017 } 00:30:53.017 ] 00:30:53.017 }, 00:30:53.017 { 00:30:53.017 "subsystem": "nbd", 00:30:53.017 "config": [] 00:30:53.017 } 00:30:53.017 ] 00:30:53.017 }' 00:30:53.017 11:43:38 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:53.017 11:43:38 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:30:53.017 [2024-11-20 11:43:38.598648] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 00:30:53.017 [2024-11-20 11:43:38.598753] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid110441 ] 00:30:53.275 [2024-11-20 11:43:38.748347] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:53.275 [2024-11-20 11:43:38.785189] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:53.534 [2024-11-20 11:43:38.930117] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:30:53.534 11:43:39 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:53.534 11:43:39 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:30:53.534 11:43:39 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:30:53.534 11:43:39 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:53.534 11:43:39 keyring_file -- keyring/file.sh@121 -- # jq length 00:30:54.102 11:43:39 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:30:54.102 11:43:39 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:30:54.102 11:43:39 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:30:54.102 11:43:39 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:54.102 11:43:39 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:54.102 11:43:39 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:30:54.102 11:43:39 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:54.360 11:43:39 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:30:54.360 11:43:39 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:30:54.360 11:43:39 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:30:54.360 11:43:39 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:54.360 11:43:39 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:54.360 11:43:39 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:54.360 11:43:39 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:30:54.619 11:43:40 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:30:54.619 
11:43:40 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:30:54.619 11:43:40 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:30:54.619 11:43:40 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:30:54.877 11:43:40 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:30:54.877 11:43:40 keyring_file -- keyring/file.sh@1 -- # cleanup 00:30:54.877 11:43:40 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.UVz3Q7Yylx /tmp/tmp.WOYDjndjO1 00:30:54.877 11:43:40 keyring_file -- keyring/file.sh@20 -- # killprocess 110441 00:30:54.877 11:43:40 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 110441 ']' 00:30:54.877 11:43:40 keyring_file -- common/autotest_common.sh@958 -- # kill -0 110441 00:30:54.877 11:43:40 keyring_file -- common/autotest_common.sh@959 -- # uname 00:30:54.877 11:43:40 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:54.877 11:43:40 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 110441 00:30:54.877 11:43:40 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:30:54.877 11:43:40 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:30:54.878 killing process with pid 110441 00:30:54.878 11:43:40 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 110441' 00:30:54.878 11:43:40 keyring_file -- common/autotest_common.sh@973 -- # kill 110441 00:30:54.878 Received shutdown signal, test time was about 1.000000 seconds 00:30:54.878 00:30:54.878 Latency(us) 00:30:54.878 [2024-11-20T11:43:40.467Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:54.878 [2024-11-20T11:43:40.467Z] =================================================================================================================== 00:30:54.878 [2024-11-20T11:43:40.467Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:30:54.878 11:43:40 keyring_file -- common/autotest_common.sh@978 -- # wait 110441 00:30:55.137 11:43:40 keyring_file -- keyring/file.sh@21 -- # killprocess 109942 00:30:55.137 11:43:40 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 109942 ']' 00:30:55.137 11:43:40 keyring_file -- common/autotest_common.sh@958 -- # kill -0 109942 00:30:55.137 11:43:40 keyring_file -- common/autotest_common.sh@959 -- # uname 00:30:55.137 11:43:40 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:55.137 11:43:40 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 109942 00:30:55.137 11:43:40 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:55.137 11:43:40 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:55.137 killing process with pid 109942 00:30:55.137 11:43:40 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 109942' 00:30:55.137 11:43:40 keyring_file -- common/autotest_common.sh@973 -- # kill 109942 00:30:55.137 11:43:40 keyring_file -- common/autotest_common.sh@978 -- # wait 109942 00:30:55.396 00:30:55.396 real 0m16.572s 00:30:55.396 user 0m43.794s 00:30:55.396 sys 0m3.087s 00:30:55.396 11:43:40 keyring_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:55.396 11:43:40 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:30:55.396 ************************************ 00:30:55.396 END TEST keyring_file 00:30:55.396 
************************************ 00:30:55.396 11:43:40 -- spdk/autotest.sh@293 -- # [[ y == y ]] 00:30:55.396 11:43:40 -- spdk/autotest.sh@294 -- # run_test keyring_linux /home/vagrant/spdk_repo/spdk/scripts/keyctl-session-wrapper /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 00:30:55.396 11:43:40 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:30:55.396 11:43:40 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:55.396 11:43:40 -- common/autotest_common.sh@10 -- # set +x 00:30:55.396 ************************************ 00:30:55.396 START TEST keyring_linux 00:30:55.396 ************************************ 00:30:55.396 11:43:40 keyring_linux -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/scripts/keyctl-session-wrapper /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 00:30:55.396 Joined session keyring: 446516256 00:30:55.396 * Looking for test storage... 00:30:55.656 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:30:55.656 11:43:40 keyring_linux -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:30:55.656 11:43:40 keyring_linux -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:30:55.656 11:43:40 keyring_linux -- common/autotest_common.sh@1693 -- # lcov --version 00:30:55.656 11:43:41 keyring_linux -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:30:55.656 11:43:41 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:55.656 11:43:41 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:55.656 11:43:41 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:55.656 11:43:41 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:30:55.656 11:43:41 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:30:55.656 11:43:41 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:30:55.656 11:43:41 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:30:55.656 11:43:41 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:30:55.656 11:43:41 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:30:55.656 11:43:41 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:30:55.656 11:43:41 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:55.656 11:43:41 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:30:55.656 11:43:41 keyring_linux -- scripts/common.sh@345 -- # : 1 00:30:55.656 11:43:41 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:55.656 11:43:41 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:55.656 11:43:41 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:30:55.656 11:43:41 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:30:55.656 11:43:41 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:55.656 11:43:41 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:30:55.656 11:43:41 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:30:55.656 11:43:41 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:30:55.656 11:43:41 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:30:55.656 11:43:41 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:55.656 11:43:41 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:30:55.656 11:43:41 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:30:55.656 11:43:41 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:55.656 11:43:41 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:55.656 11:43:41 keyring_linux -- scripts/common.sh@368 -- # return 0 00:30:55.656 11:43:41 keyring_linux -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:55.656 11:43:41 keyring_linux -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:30:55.656 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:55.656 --rc genhtml_branch_coverage=1 00:30:55.656 --rc genhtml_function_coverage=1 00:30:55.656 --rc genhtml_legend=1 00:30:55.656 --rc geninfo_all_blocks=1 00:30:55.657 --rc geninfo_unexecuted_blocks=1 00:30:55.657 00:30:55.657 ' 00:30:55.657 11:43:41 keyring_linux -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:30:55.657 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:55.657 --rc genhtml_branch_coverage=1 00:30:55.657 --rc genhtml_function_coverage=1 00:30:55.657 --rc genhtml_legend=1 00:30:55.657 --rc geninfo_all_blocks=1 00:30:55.657 --rc geninfo_unexecuted_blocks=1 00:30:55.657 00:30:55.657 ' 00:30:55.657 11:43:41 keyring_linux -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:30:55.657 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:55.657 --rc genhtml_branch_coverage=1 00:30:55.657 --rc genhtml_function_coverage=1 00:30:55.657 --rc genhtml_legend=1 00:30:55.657 --rc geninfo_all_blocks=1 00:30:55.657 --rc geninfo_unexecuted_blocks=1 00:30:55.657 00:30:55.657 ' 00:30:55.657 11:43:41 keyring_linux -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:30:55.657 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:55.657 --rc genhtml_branch_coverage=1 00:30:55.657 --rc genhtml_function_coverage=1 00:30:55.657 --rc genhtml_legend=1 00:30:55.657 --rc geninfo_all_blocks=1 00:30:55.657 --rc geninfo_unexecuted_blocks=1 00:30:55.657 00:30:55.657 ' 00:30:55.657 11:43:41 keyring_linux -- keyring/linux.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:30:55.657 11:43:41 keyring_linux -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:30:55.657 11:43:41 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:30:55.657 11:43:41 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:55.657 11:43:41 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:55.657 11:43:41 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:55.657 11:43:41 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:55.657 11:43:41 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:55.657 11:43:41 
keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:55.657 11:43:41 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:55.657 11:43:41 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:55.657 11:43:41 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:55.657 11:43:41 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:55.657 11:43:41 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:806d442c-08ba-4719-b76d-33169283459a 00:30:55.657 11:43:41 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=806d442c-08ba-4719-b76d-33169283459a 00:30:55.657 11:43:41 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:55.657 11:43:41 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:55.657 11:43:41 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:30:55.657 11:43:41 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:55.657 11:43:41 keyring_linux -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:30:55.657 11:43:41 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:30:55.657 11:43:41 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:55.657 11:43:41 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:55.657 11:43:41 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:55.657 11:43:41 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:55.657 11:43:41 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:55.657 11:43:41 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:55.657 11:43:41 keyring_linux -- paths/export.sh@5 -- # export PATH 00:30:55.657 11:43:41 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:55.657 11:43:41 keyring_linux -- nvmf/common.sh@51 -- # : 0 
00:30:55.657 11:43:41 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:55.657 11:43:41 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:55.657 11:43:41 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:55.657 11:43:41 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:55.657 11:43:41 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:55.657 11:43:41 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:55.657 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:55.657 11:43:41 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:55.657 11:43:41 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:55.657 11:43:41 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:55.657 11:43:41 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:30:55.657 11:43:41 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:30:55.657 11:43:41 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:30:55.657 11:43:41 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:30:55.657 11:43:41 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:30:55.657 11:43:41 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:30:55.657 11:43:41 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:30:55.657 11:43:41 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:30:55.657 11:43:41 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:30:55.657 11:43:41 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:30:55.657 11:43:41 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:30:55.657 11:43:41 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:30:55.657 11:43:41 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:30:55.657 11:43:41 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:30:55.657 11:43:41 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:30:55.657 11:43:41 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:30:55.657 11:43:41 keyring_linux -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:30:55.657 11:43:41 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:30:55.657 11:43:41 keyring_linux -- nvmf/common.sh@733 -- # python - 00:30:55.657 11:43:41 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:30:55.657 /tmp/:spdk-test:key0 00:30:55.657 11:43:41 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:30:55.657 11:43:41 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:30:55.657 11:43:41 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:30:55.657 11:43:41 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:30:55.657 11:43:41 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:30:55.657 11:43:41 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:30:55.657 11:43:41 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:30:55.657 11:43:41 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 
112233445566778899aabbccddeeff00 0 00:30:55.657 11:43:41 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:30:55.657 11:43:41 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:30:55.657 11:43:41 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:30:55.657 11:43:41 keyring_linux -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:30:55.657 11:43:41 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:30:55.657 11:43:41 keyring_linux -- nvmf/common.sh@733 -- # python - 00:30:55.657 11:43:41 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:30:55.657 /tmp/:spdk-test:key1 00:30:55.657 11:43:41 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:30:55.657 11:43:41 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=110585 00:30:55.657 11:43:41 keyring_linux -- keyring/linux.sh@50 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:30:55.657 11:43:41 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 110585 00:30:55.657 11:43:41 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 110585 ']' 00:30:55.657 11:43:41 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:55.657 11:43:41 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:55.657 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:55.657 11:43:41 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:55.657 11:43:41 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:55.658 11:43:41 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:30:55.917 [2024-11-20 11:43:41.288899] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 
00:30:55.917 [2024-11-20 11:43:41.289177] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid110585 ] 00:30:55.917 [2024-11-20 11:43:41.438567] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:55.917 [2024-11-20 11:43:41.472847] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:56.176 11:43:41 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:56.176 11:43:41 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:30:56.176 11:43:41 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:30:56.176 11:43:41 keyring_linux -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:56.176 11:43:41 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:30:56.176 [2024-11-20 11:43:41.658693] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:56.176 null0 00:30:56.176 [2024-11-20 11:43:41.690659] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:30:56.176 [2024-11-20 11:43:41.691012] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:30:56.176 11:43:41 keyring_linux -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:56.176 11:43:41 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:30:56.176 96828550 00:30:56.176 11:43:41 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:30:56.176 437731012 00:30:56.176 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:30:56.176 11:43:41 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=110609 00:30:56.176 11:43:41 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 110609 /var/tmp/bperf.sock 00:30:56.176 11:43:41 keyring_linux -- keyring/linux.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:30:56.176 11:43:41 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 110609 ']' 00:30:56.176 11:43:41 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:30:56.176 11:43:41 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:56.176 11:43:41 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:30:56.176 11:43:41 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:56.176 11:43:41 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:30:56.435 [2024-11-20 11:43:41.768467] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 
00:30:56.435 [2024-11-20 11:43:41.768572] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid110609 ] 00:30:56.435 [2024-11-20 11:43:41.918283] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:56.435 [2024-11-20 11:43:41.958074] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:56.694 11:43:42 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:56.694 11:43:42 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:30:56.694 11:43:42 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:30:56.694 11:43:42 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:30:57.016 11:43:42 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:30:57.016 11:43:42 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:30:57.275 11:43:42 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:30:57.275 11:43:42 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:30:57.533 [2024-11-20 11:43:43.017076] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:30:57.533 nvme0n1 00:30:57.533 11:43:43 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:30:57.533 11:43:43 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:30:57.533 11:43:43 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:30:57.533 11:43:43 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:30:57.533 11:43:43 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:30:57.533 11:43:43 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:58.100 11:43:43 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:30:58.100 11:43:43 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:30:58.100 11:43:43 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:30:58.100 11:43:43 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:30:58.100 11:43:43 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:58.100 11:43:43 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:30:58.100 11:43:43 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:58.359 11:43:43 keyring_linux -- keyring/linux.sh@25 -- # sn=96828550 00:30:58.359 11:43:43 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:30:58.359 11:43:43 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:30:58.359 11:43:43 keyring_linux -- keyring/linux.sh@26 -- # [[ 96828550 == \9\6\8\2\8\5\5\0 ]] 00:30:58.359 11:43:43 keyring_linux -- 
keyring/linux.sh@27 -- # keyctl print 96828550 00:30:58.359 11:43:43 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:30:58.359 11:43:43 keyring_linux -- keyring/linux.sh@79 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:30:58.359 Running I/O for 1 seconds... 00:30:59.735 12508.00 IOPS, 48.86 MiB/s 00:30:59.735 Latency(us) 00:30:59.735 [2024-11-20T11:43:45.324Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:59.735 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:30:59.735 nvme0n1 : 1.01 12504.90 48.85 0.00 0.00 10177.99 8817.57 19779.96 00:30:59.735 [2024-11-20T11:43:45.324Z] =================================================================================================================== 00:30:59.735 [2024-11-20T11:43:45.324Z] Total : 12504.90 48.85 0.00 0.00 10177.99 8817.57 19779.96 00:30:59.735 { 00:30:59.735 "results": [ 00:30:59.735 { 00:30:59.735 "job": "nvme0n1", 00:30:59.735 "core_mask": "0x2", 00:30:59.735 "workload": "randread", 00:30:59.735 "status": "finished", 00:30:59.735 "queue_depth": 128, 00:30:59.735 "io_size": 4096, 00:30:59.735 "runtime": 1.010564, 00:30:59.735 "iops": 12504.898254835913, 00:30:59.735 "mibps": 48.847258807952784, 00:30:59.735 "io_failed": 0, 00:30:59.735 "io_timeout": 0, 00:30:59.735 "avg_latency_us": 10177.98732553037, 00:30:59.735 "min_latency_us": 8817.57090909091, 00:30:59.735 "max_latency_us": 19779.956363636364 00:30:59.735 } 00:30:59.735 ], 00:30:59.735 "core_count": 1 00:30:59.735 } 00:30:59.735 11:43:44 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:30:59.735 11:43:44 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:30:59.735 11:43:45 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:30:59.735 11:43:45 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:30:59.735 11:43:45 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:30:59.735 11:43:45 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:30:59.735 11:43:45 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:30:59.735 11:43:45 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:59.994 11:43:45 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:30:59.994 11:43:45 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:30:59.994 11:43:45 keyring_linux -- keyring/linux.sh@23 -- # return 00:30:59.994 11:43:45 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:30:59.994 11:43:45 keyring_linux -- common/autotest_common.sh@652 -- # local es=0 00:30:59.994 11:43:45 keyring_linux -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:30:59.994 11:43:45 keyring_linux -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:30:59.994 11:43:45 keyring_linux -- 
common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:59.994 11:43:45 keyring_linux -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:31:00.254 11:43:45 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:00.254 11:43:45 keyring_linux -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:31:00.254 11:43:45 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:31:00.513 [2024-11-20 11:43:45.915041] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:31:00.513 [2024-11-20 11:43:45.915667] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4f50 (107): Transport endpoint is not connected 00:31:00.513 [2024-11-20 11:43:45.916654] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4f50 (9): Bad file descriptor 00:31:00.513 [2024-11-20 11:43:45.917651] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:31:00.513 [2024-11-20 11:43:45.917679] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:31:00.513 [2024-11-20 11:43:45.917690] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:31:00.513 [2024-11-20 11:43:45.917702] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
00:31:00.513 2024/11/20 11:43:45 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host0 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk::spdk-test:key1 subnqn:nqn.2016-06.io.spdk:cnode0 traddr:127.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:31:00.513 request: 00:31:00.513 { 00:31:00.513 "method": "bdev_nvme_attach_controller", 00:31:00.513 "params": { 00:31:00.513 "name": "nvme0", 00:31:00.513 "trtype": "tcp", 00:31:00.513 "traddr": "127.0.0.1", 00:31:00.513 "adrfam": "ipv4", 00:31:00.513 "trsvcid": "4420", 00:31:00.513 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:00.513 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:00.513 "prchk_reftag": false, 00:31:00.513 "prchk_guard": false, 00:31:00.513 "hdgst": false, 00:31:00.513 "ddgst": false, 00:31:00.513 "psk": ":spdk-test:key1", 00:31:00.513 "allow_unrecognized_csi": false 00:31:00.513 } 00:31:00.513 } 00:31:00.513 Got JSON-RPC error response 00:31:00.513 GoRPCClient: error on JSON-RPC call 00:31:00.513 11:43:45 keyring_linux -- common/autotest_common.sh@655 -- # es=1 00:31:00.513 11:43:45 keyring_linux -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:31:00.513 11:43:45 keyring_linux -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:31:00.513 11:43:45 keyring_linux -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:31:00.513 11:43:45 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:31:00.513 11:43:45 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:31:00.513 11:43:45 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:31:00.513 11:43:45 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:31:00.513 11:43:45 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:31:00.513 11:43:45 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:31:00.513 11:43:45 keyring_linux -- keyring/linux.sh@33 -- # sn=96828550 00:31:00.513 11:43:45 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 96828550 00:31:00.513 1 links removed 00:31:00.513 11:43:45 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:31:00.513 11:43:45 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:31:00.513 11:43:45 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:31:00.513 11:43:45 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:31:00.513 11:43:45 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:31:00.513 11:43:45 keyring_linux -- keyring/linux.sh@33 -- # sn=437731012 00:31:00.513 11:43:45 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 437731012 00:31:00.513 1 links removed 00:31:00.513 11:43:45 keyring_linux -- keyring/linux.sh@41 -- # killprocess 110609 00:31:00.513 11:43:45 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 110609 ']' 00:31:00.513 11:43:45 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 110609 00:31:00.513 11:43:45 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:31:00.513 11:43:45 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:00.513 11:43:45 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 110609 00:31:00.513 11:43:45 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:31:00.513 killing 
process with pid 110609 00:31:00.513 11:43:45 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:31:00.513 11:43:45 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 110609' 00:31:00.513 Received shutdown signal, test time was about 1.000000 seconds 00:31:00.513 00:31:00.513 Latency(us) 00:31:00.513 [2024-11-20T11:43:46.102Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:00.513 [2024-11-20T11:43:46.102Z] =================================================================================================================== 00:31:00.513 [2024-11-20T11:43:46.102Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:00.513 11:43:45 keyring_linux -- common/autotest_common.sh@973 -- # kill 110609 00:31:00.513 11:43:45 keyring_linux -- common/autotest_common.sh@978 -- # wait 110609 00:31:00.772 11:43:46 keyring_linux -- keyring/linux.sh@42 -- # killprocess 110585 00:31:00.772 11:43:46 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 110585 ']' 00:31:00.772 11:43:46 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 110585 00:31:00.772 11:43:46 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:31:00.772 11:43:46 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:00.772 11:43:46 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 110585 00:31:00.772 11:43:46 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:00.772 killing process with pid 110585 00:31:00.772 11:43:46 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:00.772 11:43:46 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 110585' 00:31:00.772 11:43:46 keyring_linux -- common/autotest_common.sh@973 -- # kill 110585 00:31:00.772 11:43:46 keyring_linux -- common/autotest_common.sh@978 -- # wait 110585 00:31:01.030 00:31:01.030 real 0m5.524s 00:31:01.030 user 0m11.665s 00:31:01.030 sys 0m1.423s 00:31:01.030 11:43:46 keyring_linux -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:01.030 ************************************ 00:31:01.030 END TEST keyring_linux 00:31:01.030 ************************************ 00:31:01.030 11:43:46 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:31:01.030 11:43:46 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:31:01.030 11:43:46 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:31:01.030 11:43:46 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:31:01.030 11:43:46 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:31:01.030 11:43:46 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:31:01.030 11:43:46 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:31:01.030 11:43:46 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:31:01.030 11:43:46 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:31:01.030 11:43:46 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:31:01.030 11:43:46 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:31:01.030 11:43:46 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:31:01.030 11:43:46 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:31:01.030 11:43:46 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:31:01.030 11:43:46 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:31:01.030 11:43:46 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:31:01.030 11:43:46 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 00:31:01.030 11:43:46 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:31:01.030 11:43:46 -- common/autotest_common.sh@726 -- 
# xtrace_disable 00:31:01.030 11:43:46 -- common/autotest_common.sh@10 -- # set +x 00:31:01.030 11:43:46 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:31:01.030 11:43:46 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:31:01.030 11:43:46 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:31:01.030 11:43:46 -- common/autotest_common.sh@10 -- # set +x 00:31:02.930 INFO: APP EXITING 00:31:02.930 INFO: killing all VMs 00:31:02.930 INFO: killing vhost app 00:31:02.930 INFO: EXIT DONE 00:31:03.497 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:31:03.497 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:31:03.497 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:31:04.068 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:31:04.068 Cleaning 00:31:04.068 Removing: /var/run/dpdk/spdk0/config 00:31:04.068 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:31:04.068 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:31:04.068 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:31:04.068 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:31:04.068 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:31:04.068 Removing: /var/run/dpdk/spdk0/hugepage_info 00:31:04.068 Removing: /var/run/dpdk/spdk1/config 00:31:04.068 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:31:04.068 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:31:04.068 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:31:04.068 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:31:04.068 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:31:04.068 Removing: /var/run/dpdk/spdk1/hugepage_info 00:31:04.068 Removing: /var/run/dpdk/spdk2/config 00:31:04.068 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:31:04.327 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:31:04.327 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:31:04.327 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:31:04.327 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:31:04.327 Removing: /var/run/dpdk/spdk2/hugepage_info 00:31:04.327 Removing: /var/run/dpdk/spdk3/config 00:31:04.327 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:31:04.327 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:31:04.327 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:31:04.327 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:31:04.327 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:31:04.327 Removing: /var/run/dpdk/spdk3/hugepage_info 00:31:04.327 Removing: /var/run/dpdk/spdk4/config 00:31:04.327 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:31:04.327 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:31:04.327 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:31:04.327 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:31:04.327 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:31:04.327 Removing: /var/run/dpdk/spdk4/hugepage_info 00:31:04.327 Removing: /dev/shm/nvmf_trace.0 00:31:04.327 Removing: /dev/shm/spdk_tgt_trace.pid58635 00:31:04.327 Removing: /var/run/dpdk/spdk0 00:31:04.327 Removing: /var/run/dpdk/spdk1 00:31:04.327 Removing: /var/run/dpdk/spdk2 00:31:04.327 Removing: /var/run/dpdk/spdk3 00:31:04.327 Removing: /var/run/dpdk/spdk4 00:31:04.327 Removing: /var/run/dpdk/spdk_pid100536 00:31:04.327 Removing: /var/run/dpdk/spdk_pid100582 00:31:04.327 Removing: 
/var/run/dpdk/spdk_pid100925 00:31:04.327 Removing: /var/run/dpdk/spdk_pid100966 00:31:04.327 Removing: /var/run/dpdk/spdk_pid101362 00:31:04.327 Removing: /var/run/dpdk/spdk_pid101918 00:31:04.327 Removing: /var/run/dpdk/spdk_pid102349 00:31:04.327 Removing: /var/run/dpdk/spdk_pid103350 00:31:04.327 Removing: /var/run/dpdk/spdk_pid104374 00:31:04.327 Removing: /var/run/dpdk/spdk_pid104481 00:31:04.327 Removing: /var/run/dpdk/spdk_pid104544 00:31:04.327 Removing: /var/run/dpdk/spdk_pid106126 00:31:04.327 Removing: /var/run/dpdk/spdk_pid106439 00:31:04.327 Removing: /var/run/dpdk/spdk_pid106774 00:31:04.327 Removing: /var/run/dpdk/spdk_pid107318 00:31:04.327 Removing: /var/run/dpdk/spdk_pid107323 00:31:04.327 Removing: /var/run/dpdk/spdk_pid107722 00:31:04.327 Removing: /var/run/dpdk/spdk_pid107872 00:31:04.327 Removing: /var/run/dpdk/spdk_pid108024 00:31:04.327 Removing: /var/run/dpdk/spdk_pid108121 00:31:04.327 Removing: /var/run/dpdk/spdk_pid108271 00:31:04.327 Removing: /var/run/dpdk/spdk_pid108375 00:31:04.327 Removing: /var/run/dpdk/spdk_pid109086 00:31:04.327 Removing: /var/run/dpdk/spdk_pid109116 00:31:04.327 Removing: /var/run/dpdk/spdk_pid109150 00:31:04.327 Removing: /var/run/dpdk/spdk_pid109407 00:31:04.327 Removing: /var/run/dpdk/spdk_pid109441 00:31:04.327 Removing: /var/run/dpdk/spdk_pid109472 00:31:04.327 Removing: /var/run/dpdk/spdk_pid109942 00:31:04.327 Removing: /var/run/dpdk/spdk_pid109959 00:31:04.327 Removing: /var/run/dpdk/spdk_pid110441 00:31:04.327 Removing: /var/run/dpdk/spdk_pid110585 00:31:04.327 Removing: /var/run/dpdk/spdk_pid110609 00:31:04.327 Removing: /var/run/dpdk/spdk_pid58488 00:31:04.327 Removing: /var/run/dpdk/spdk_pid58635 00:31:04.327 Removing: /var/run/dpdk/spdk_pid58891 00:31:04.327 Removing: /var/run/dpdk/spdk_pid58978 00:31:04.327 Removing: /var/run/dpdk/spdk_pid59004 00:31:04.327 Removing: /var/run/dpdk/spdk_pid59113 00:31:04.327 Removing: /var/run/dpdk/spdk_pid59130 00:31:04.327 Removing: /var/run/dpdk/spdk_pid59264 00:31:04.327 Removing: /var/run/dpdk/spdk_pid59560 00:31:04.327 Removing: /var/run/dpdk/spdk_pid59744 00:31:04.327 Removing: /var/run/dpdk/spdk_pid59834 00:31:04.327 Removing: /var/run/dpdk/spdk_pid59915 00:31:04.327 Removing: /var/run/dpdk/spdk_pid59999 00:31:04.327 Removing: /var/run/dpdk/spdk_pid60042 00:31:04.327 Removing: /var/run/dpdk/spdk_pid60073 00:31:04.327 Removing: /var/run/dpdk/spdk_pid60137 00:31:04.327 Removing: /var/run/dpdk/spdk_pid60241 00:31:04.327 Removing: /var/run/dpdk/spdk_pid60894 00:31:04.327 Removing: /var/run/dpdk/spdk_pid60945 00:31:04.327 Removing: /var/run/dpdk/spdk_pid60995 00:31:04.327 Removing: /var/run/dpdk/spdk_pid61008 00:31:04.327 Removing: /var/run/dpdk/spdk_pid61083 00:31:04.327 Removing: /var/run/dpdk/spdk_pid61103 00:31:04.327 Removing: /var/run/dpdk/spdk_pid61182 00:31:04.327 Removing: /var/run/dpdk/spdk_pid61191 00:31:04.327 Removing: /var/run/dpdk/spdk_pid61248 00:31:04.327 Removing: /var/run/dpdk/spdk_pid61259 00:31:04.327 Removing: /var/run/dpdk/spdk_pid61311 00:31:04.327 Removing: /var/run/dpdk/spdk_pid61327 00:31:04.327 Removing: /var/run/dpdk/spdk_pid61468 00:31:04.327 Removing: /var/run/dpdk/spdk_pid61509 00:31:04.327 Removing: /var/run/dpdk/spdk_pid61586 00:31:04.327 Removing: /var/run/dpdk/spdk_pid62051 00:31:04.327 Removing: /var/run/dpdk/spdk_pid62429 00:31:04.585 Removing: /var/run/dpdk/spdk_pid64698 00:31:04.585 Removing: /var/run/dpdk/spdk_pid64749 00:31:04.586 Removing: /var/run/dpdk/spdk_pid65090 00:31:04.586 Removing: /var/run/dpdk/spdk_pid65131 00:31:04.586 Removing: 
/var/run/dpdk/spdk_pid65537 00:31:04.586 Removing: /var/run/dpdk/spdk_pid66131 00:31:04.586 Removing: /var/run/dpdk/spdk_pid66581 00:31:04.586 Removing: /var/run/dpdk/spdk_pid67614 00:31:04.586 Removing: /var/run/dpdk/spdk_pid68682 00:31:04.586 Removing: /var/run/dpdk/spdk_pid68799 00:31:04.586 Removing: /var/run/dpdk/spdk_pid68867 00:31:04.586 Removing: /var/run/dpdk/spdk_pid70474 00:31:04.586 Removing: /var/run/dpdk/spdk_pid70831 00:31:04.586 Removing: /var/run/dpdk/spdk_pid74645 00:31:04.586 Removing: /var/run/dpdk/spdk_pid75056 00:31:04.586 Removing: /var/run/dpdk/spdk_pid75674 00:31:04.586 Removing: /var/run/dpdk/spdk_pid76184 00:31:04.586 Removing: /var/run/dpdk/spdk_pid82203 00:31:04.586 Removing: /var/run/dpdk/spdk_pid82707 00:31:04.586 Removing: /var/run/dpdk/spdk_pid82817 00:31:04.586 Removing: /var/run/dpdk/spdk_pid82957 00:31:04.586 Removing: /var/run/dpdk/spdk_pid83015 00:31:04.586 Removing: /var/run/dpdk/spdk_pid83054 00:31:04.586 Removing: /var/run/dpdk/spdk_pid83093 00:31:04.586 Removing: /var/run/dpdk/spdk_pid83257 00:31:04.586 Removing: /var/run/dpdk/spdk_pid83398 00:31:04.586 Removing: /var/run/dpdk/spdk_pid83661 00:31:04.586 Removing: /var/run/dpdk/spdk_pid83777 00:31:04.586 Removing: /var/run/dpdk/spdk_pid84030 00:31:04.586 Removing: /var/run/dpdk/spdk_pid84123 00:31:04.586 Removing: /var/run/dpdk/spdk_pid84244 00:31:04.586 Removing: /var/run/dpdk/spdk_pid84620 00:31:04.586 Removing: /var/run/dpdk/spdk_pid85090 00:31:04.586 Removing: /var/run/dpdk/spdk_pid85091 00:31:04.586 Removing: /var/run/dpdk/spdk_pid85092 00:31:04.586 Removing: /var/run/dpdk/spdk_pid85369 00:31:04.586 Removing: /var/run/dpdk/spdk_pid85639 00:31:04.586 Removing: /var/run/dpdk/spdk_pid86046 00:31:04.586 Removing: /var/run/dpdk/spdk_pid86354 00:31:04.586 Removing: /var/run/dpdk/spdk_pid86937 00:31:04.586 Removing: /var/run/dpdk/spdk_pid86939 00:31:04.586 Removing: /var/run/dpdk/spdk_pid87341 00:31:04.586 Removing: /var/run/dpdk/spdk_pid87355 00:31:04.586 Removing: /var/run/dpdk/spdk_pid87375 00:31:04.586 Removing: /var/run/dpdk/spdk_pid87402 00:31:04.586 Removing: /var/run/dpdk/spdk_pid87411 00:31:04.586 Removing: /var/run/dpdk/spdk_pid87807 00:31:04.586 Removing: /var/run/dpdk/spdk_pid87850 00:31:04.586 Removing: /var/run/dpdk/spdk_pid88235 00:31:04.586 Removing: /var/run/dpdk/spdk_pid88468 00:31:04.586 Removing: /var/run/dpdk/spdk_pid88996 00:31:04.586 Removing: /var/run/dpdk/spdk_pid89607 00:31:04.586 Removing: /var/run/dpdk/spdk_pid91041 00:31:04.586 Removing: /var/run/dpdk/spdk_pid91678 00:31:04.586 Removing: /var/run/dpdk/spdk_pid91681 00:31:04.586 Removing: /var/run/dpdk/spdk_pid93736 00:31:04.586 Removing: /var/run/dpdk/spdk_pid93816 00:31:04.586 Removing: /var/run/dpdk/spdk_pid93893 00:31:04.586 Removing: /var/run/dpdk/spdk_pid93965 00:31:04.586 Removing: /var/run/dpdk/spdk_pid94097 00:31:04.586 Removing: /var/run/dpdk/spdk_pid94186 00:31:04.586 Removing: /var/run/dpdk/spdk_pid94260 00:31:04.586 Removing: /var/run/dpdk/spdk_pid94331 00:31:04.586 Removing: /var/run/dpdk/spdk_pid94722 00:31:04.586 Removing: /var/run/dpdk/spdk_pid95469 00:31:04.586 Removing: /var/run/dpdk/spdk_pid96858 00:31:04.586 Removing: /var/run/dpdk/spdk_pid97052 00:31:04.586 Removing: /var/run/dpdk/spdk_pid97325 00:31:04.586 Removing: /var/run/dpdk/spdk_pid97855 00:31:04.586 Removing: /var/run/dpdk/spdk_pid98208 00:31:04.586 Clean 00:31:04.586 11:43:50 -- common/autotest_common.sh@1453 -- # return 0 00:31:04.586 11:43:50 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup 00:31:04.586 11:43:50 -- 
common/autotest_common.sh@732 -- # xtrace_disable 00:31:04.586 11:43:50 -- common/autotest_common.sh@10 -- # set +x 00:31:04.844 11:43:50 -- spdk/autotest.sh@391 -- # timing_exit autotest 00:31:04.844 11:43:50 -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:04.844 11:43:50 -- common/autotest_common.sh@10 -- # set +x 00:31:04.844 11:43:50 -- spdk/autotest.sh@392 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:31:04.844 11:43:50 -- spdk/autotest.sh@394 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:31:04.844 11:43:50 -- spdk/autotest.sh@394 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:31:04.844 11:43:50 -- spdk/autotest.sh@396 -- # [[ y == y ]] 00:31:04.844 11:43:50 -- spdk/autotest.sh@398 -- # hostname 00:31:04.844 11:43:50 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:31:05.103 geninfo: WARNING: invalid characters removed from testname! 00:31:37.172 11:44:19 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:31:38.547 11:44:23 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:31:41.076 11:44:26 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:31:44.361 11:44:29 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:31:46.895 11:44:32 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:31:50.234 11:44:35 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r 
/home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:31:52.763 11:44:38 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:31:52.763 11:44:38 -- spdk/autorun.sh@1 -- $ timing_finish 00:31:52.763 11:44:38 -- common/autotest_common.sh@738 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]] 00:31:52.763 11:44:38 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:31:52.764 11:44:38 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:31:52.764 11:44:38 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:31:52.764 + [[ -n 5259 ]] 00:31:52.764 + sudo kill 5259 00:31:52.772 [Pipeline] } 00:31:52.791 [Pipeline] // timeout 00:31:52.798 [Pipeline] } 00:31:52.814 [Pipeline] // stage 00:31:52.820 [Pipeline] } 00:31:52.835 [Pipeline] // catchError 00:31:52.846 [Pipeline] stage 00:31:52.849 [Pipeline] { (Stop VM) 00:31:52.864 [Pipeline] sh 00:31:53.147 + vagrant halt 00:31:57.346 ==> default: Halting domain... 00:32:02.623 [Pipeline] sh 00:32:02.986 + vagrant destroy -f 00:32:07.176 ==> default: Removing domain... 00:32:07.189 [Pipeline] sh 00:32:07.469 + mv output /var/jenkins/workspace/nvmf-tcp-vg-autotest_2/output 00:32:07.478 [Pipeline] } 00:32:07.494 [Pipeline] // stage 00:32:07.501 [Pipeline] } 00:32:07.517 [Pipeline] // dir 00:32:07.523 [Pipeline] } 00:32:07.540 [Pipeline] // wrap 00:32:07.546 [Pipeline] } 00:32:07.560 [Pipeline] // catchError 00:32:07.569 [Pipeline] stage 00:32:07.572 [Pipeline] { (Epilogue) 00:32:07.586 [Pipeline] sh 00:32:07.868 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:32:15.991 [Pipeline] catchError 00:32:15.993 [Pipeline] { 00:32:16.006 [Pipeline] sh 00:32:16.287 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:32:16.855 Artifacts sizes are good 00:32:16.864 [Pipeline] } 00:32:16.882 [Pipeline] // catchError 00:32:16.895 [Pipeline] archiveArtifacts 00:32:16.904 Archiving artifacts 00:32:17.015 [Pipeline] cleanWs 00:32:17.027 [WS-CLEANUP] Deleting project workspace... 00:32:17.027 [WS-CLEANUP] Deferred wipeout is used... 00:32:17.034 [WS-CLEANUP] done 00:32:17.036 [Pipeline] } 00:32:17.052 [Pipeline] // stage 00:32:17.057 [Pipeline] } 00:32:17.072 [Pipeline] // node 00:32:17.079 [Pipeline] End of Pipeline 00:32:17.123 Finished: SUCCESS